Designing reliable systems

By Jeroen van Dalen

At the beginning of the financial year 2019-2020, we set ourselves a goal: to evaluate every program at participant level. We wanted to be able to compare programs and projects across the business to answer questions such as: Which services do our clients love most? How can we do better next time?

Sounds pretty easy. I mean, how hard can it be to send an evaluation?

Well, it turns out, it is hard. It took us nearly nine months to get to where we are today: sending post-program and post-session evaluations to (nearly) everyone who finishes an Integral program.

I thought I’d share some of the lessons we’ve learnt, our considerations for designing robust systems and processes, and our results so far.


Early results 

We are currently sending out evaluations to at least 95% of workshop and coaching participants. We can compare results between consultants, between types of programs (workshops and coaching), and between delivery modes (digital and face-to-face).

With 170 program evaluations (and many more session evaluations) over the last three months, 51% of respondents are what we call net promoters (with a score of 8.6). This is considered “excellent” according to benchmarks, and we couldn’t be happier or prouder.
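
For context on how numbers like these are typically derived: on the standard 0-10 “How likely are you to recommend us?” scale, respondents scoring 9-10 count as promoters and 0-6 as detractors. Here is a minimal sketch in Python, with made-up ratings purely for illustration (not our actual survey data):

# Hypothetical 0-10 "How likely are you to recommend us?" responses.
ratings = [10, 9, 8, 7, 9, 10, 5, 9, 8, 10]

promoters = [r for r in ratings if r >= 9]    # 9-10 = promoter
detractors = [r for r in ratings if r <= 6]   # 0-6  = detractor

promoter_pct = 100 * len(promoters) / len(ratings)          # share of promoters
nps = promoter_pct - 100 * len(detractors) / len(ratings)   # classic Net Promoter Score
average_score = sum(ratings) / len(ratings)                  # mean rating

print(f"Promoters: {promoter_pct:.0f}%  NPS: {nps:.0f}  Average: {average_score:.1f}")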

But! Via the open comments, we are also getting useful feedback on what we can improve, so we are confident that we can do even better in the future.

1. Simplification

First, we had to design the evaluation process. Our first take on this process had over 10(!) different evaluation methods and surveys (very messy!) and many more versions (20-30 questionnaires in total). This included:

  • 3 post program evaluations

  • 3 post session evaluations

  • Numerous skills assessments and outcome assessments 

  • Multiple 360s

The more complicated a process, the harder it is to implement reliably. Not only can a lot go wrong with 20 evaluation methods, but there will also be confusion about which evaluations to send, when to send them and why.

This had to be massively simplified. We redesigned the process and decided to roll out new evaluations in three phases, with phase one including only two evaluation types, each with two versions. These were:

  • Post program evaluation (1 for coaching, 1 for workshops – with 90% overlap between them) 

  • Post session evaluation (1 for coaching and 1 for workshops – with 90% overlap between them)

2. Start small & scale up as you learn

We used the following scaling-up method:

1. Proof of Concept (days): Trial one evaluation form for one program.

2. Pilot (weeks): Trial one evaluation form for a handful of programs (all coaching).

3. Minimum Viable Product (months): Roll out evaluations for both coaching and workshops (30-60% of programs).

4. Production (years): Automated sending of evaluations at the end of programs.

Dividing the work like this is necessary, as you can’t roll it out for everything in one go. It also gives you time to stop, learn and iterate as you go. And we definitely learnt some lessons…

For example, it turned out we hadn’t previously collected email addresses for workshop participants, so we could not send evaluations for any of the workshop programs. To capture these, we had to update our customer intake process and redesign our sign-up forms.

To begin with, we designed a process that gave us email addresses for half the participants in some of our programs – not good enough – but eventually we landed on a process that gets us 100% of participants’ email addresses for 100% of our programs – a necessity for reaching our goal.

3. Automation

Human vs. Computer – who wins the following battles?

  1. Copy data from one document to another without making a mistake.

  2. Send an email to people who finished a program (without forgetting).

  3. Design a tailored leadership program.

My bet for #1 and #2 would be the computer, and the human for #3. Automation doesn’t work everywhere, but it is really good at doing things consistently and repeatably, without too much variation. Want to send an evaluation to every participant after the last session? That’s the perfect job for a computer, which is why we have now automated it.
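
As a rough illustration only (not our actual system – the names and data below are made up), the automation boils down to a scheduled job that looks up programs whose final session has ended and emails each participant a link to the evaluation:

from datetime import date

# Hypothetical program records; in practice these would come from a program database.
programs = [
    {
        "name": "Example Leadership Program",
        "last_session": date(2020, 6, 29),
        "evaluation_sent": False,
        "participants": ["ana@example.com", "ben@example.com"],
    },
]

def send_evaluation(email: str, program_name: str) -> None:
    # Placeholder: a real system would call an email or survey service here.
    print(f"Sending '{program_name}' evaluation to {email}")

def send_pending_evaluations(today: date) -> None:
    """Run daily: send the post-program evaluation once the last session is over."""
    for program in programs:
        if program["last_session"] <= today and not program["evaluation_sent"]:
            for email in program["participants"]:
                send_evaluation(email, program["name"])
            program["evaluation_sent"] = True  # don't email the same people twice

send_pending_evaluations(date.today())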

4. Communicate, communicate & communicate… but not always

About five months into this project I got a question (with some frustration in the tone): “Why haven’t 360s been implemented for every program?” It was clear that the person asking thought this was part of the evaluation project (it was), and that it should already have been implemented (it shouldn’t have been).

So, what went wrong? This piece of information was communicated three times in three different mediums – was that enough? Maybe, but it was not clear enough: people had heard it, but not understood it. Not enough effective communication on my part, and no checking whether the message had been received correctly.

On the other hand, we tried to involve everyone in this process, and the result was that progress was very slow and we ended up (see lesson 1) with a LOT of variations and opinions on how to do it. We “limited” the group of people involved for a while (to only two) to pick up speed and reduce complexity until the project was in calmer waters, and then we opened up communication again.
