Taking stock with Gareth Parry


We were delighted this week to welcome Dr Gareth Parry to Haelo HQ.

As Senior Scientist at the Institute for Healthcare Improvement (IHI), Gareth is leading the application of a Rapid Cycle Evaluation system.

Taking time out from his busy schedule, Gareth sat down with us to discuss some key topics about evaluation:


What is evaluation?

Evaluation to me is really about learning how, where and why something works, but I’ve also come to realise it can mean very different things to very different people.

Many people think that evaluation is about asking whether something worked, where the answer is either yes or no. But evaluation can also be used to answer broader and more practical questions: how does something work, where does it work, and how can we make it work better?

Why do we evaluate?

I tend to evaluate because I want to know where something works, and with what impact. The reason I want to find this out is that, assuming what we do is scaled up to other places in the future, it would be good to give people an idea of:

– what improvements are likely to occur

– what the impact is going to be, and what is going to work, or

– what we are likely to need to amend so that it will work.

Is evaluation in the world of improvement different compared to the world of research?

I don’t think the two things are that different, actually. When people think about doing research there’s a research question, and when we think about doing evaluation there ought to be evaluation questions. Those questions, from a research perspective or an evaluation perspective, often go hand in hand.

Often I think of evaluation as a subset of research. People in research generate new knowledge, whereas evaluation is about thinking about how and where new knowledge may be applied.

Can an evaluation novice do evaluation in their own improvement project?

Yes, because at the heart of quality improvement is a learning cycle – the Plan, Do, Study, Act (PDSA) rapid-cycle testing approach.

You plan something, maybe an experiment; you do it and plot the data; you study what happened; and you decide what you’re going to do next.

And when you’re doing evaluation, it’s almost exactly the same thing: we plan the evaluation question, we study what happened, and we decide what we’re going to do next. So to me, it is the same cycle.

Gareth Parry speaking at the Making Safety Visible summit in October 2015

What are your top evaluation tips for people working in improvement?

As we’ve been thinking about evaluation at the IHI, we’ve been talking about getting people to work through ‘Five Core Components’ to describe what an improvement initiative is:

The ‘Aim’ – What are you trying to achieve, what’s the aim?

The ‘What’ – Be clear about what it is that people at points of care will do – what changes they are going to make, usually illustrated with a driver diagram.

The ‘How’ – Explain how the improvement is supposed to work: what improvement methods we may be teaching, what the results of those activities are, and how that is going to help people put the changes in their driver diagram into place.

The ‘Measures’ – A data measurement plan. What data are you going to collect to understand whether people are progressing towards their aims and goals, and whether the changes you make lead to improvement?

‘Dissemination’ – And finally, think about what a spread and dissemination plan looks like, because we want to make sure the improvement work is captured as we go. We want to understand what has changed along the way because, if we’re going to spread and scale up our work in the future, that dissemination plan is really important.

We need to communicate what we’ve learnt, so others can apply it in their own settings.

If you want to hear more about the Five Core Components, you can watch the IHI film.

 

Where do you see QI and evaluation in the future?

I think there’s a much greater understanding now of how the worlds of evaluation and quality improvement should, or can, come together. That is especially true with formative evaluation, where people stop and pause to check against the data whether what they predicted actually happened, updating what they do as they go.

This approach goes hand in hand with quality improvement methods and techniques. I think those two things will come together more and more, and I can see it happening already.

I think the more it happens, the better for both the evaluation and the quality improvement field.

There have often been disagreements between those leading improvement initiatives and evaluators about what the impact has been. This has often been due to confusion about what the improvement initiative actually was. Moreover, because improvement initiatives use rapid-cycle testing, they often change and adapt over time, catching more orthodox evaluators by surprise. If evaluators and improvers can communicate more, and apply approaches that are adaptive, then more practical and relevant evaluations can be conducted.

 

Thank you Gareth! We look forward to seeing you again soon.

What do you think?

Leave a comment below
