Welcome to the event blog for our new Improvement Science for Academics (IS4Ac) cohort 4, Workshop 1, held at the iconic Imperial War Museum, Salford Quays (7th – 9th June 2017). Jess Roberts, IS4 Project Manager, reviews the three-day event.
IS4Ac is Haelo’s flagship improvement training programme for academics, developed and delivered in partnership with the Manchester Academic Health Science Centre (MAHSC). It is a bespoke, team-based training programme in improvement science that blends web-based learning, improvement coaching and site visits.
The eight teams attending today successfully completed the application process to participate in the Breakthrough Series Collaborative. Over the next 12 months, teams will attend three workshops and work through three action periods before a summit event at the end of the course.
— Kurt Bramfitt (@KurtBramfitt) June 7, 2017
Professor Maxine Power, Haelo Chief Executive and IS4 expert faculty member, opens the event. After guided icebreakers, Maxine moves on to the origins of quality improvement (QI) in practice. She asks the teams what they would like to achieve by participating in the programme and outlines the importance of a common understanding of QI. She talks about how hard it can be to begin a QI project without support, and the importance of learning a set of tools and techniques to use. That’s the goal of these workshops: to give participants the toolkit and the support to deliver their QI projects.
Maxine uses the example of Ignaz Semmelweis, who noticed a high rate of post-partum deaths on a particular ward in mid-nineteenth-century Vienna. The post-delivery mortality rate on the medical students’ ward was 13–18%, while that on the ward attended by midwives was only 2%. Why might this be? The answer was sanitation. The medical students often attended the women having come straight from dissecting the dead. On noticing that a colleague with an open wound caught a similar fever to the post-delivery women, Semmelweis introduced chlorine hand-washes, which saved many lives. However, Maxine points out that Semmelweis’ findings were ignored by the medical community and his contract with the hospital was not renewed. This is a problem faced by every QI project, since named the Semmelweis reflex: the natural rejection of new ideas and processes.
After this case study, Maxine illuminates the correct way to proceed, using the work of Juran, Shewhart, and Deming to demonstrate the way forward for any QI project. The way to get your point across, Maxine posits, is by presenting your evidence base and data, but also using patient stories to humanise the face of your QI project and appeal to hearts as well as minds. To help our teams do this, Maxine introduces the models and paradigms used throughout the workshops to organise their thinking and present their ideas in the most effective way. It’s an inspiring start. Read Maxine’s blog, ‘What is Improvement Science anyway?’
Kurt Bramfitt, Haelo Senior Improvement Advisor and IS4 Course Director, is up next for an introduction to Deming and the Theory of Profound Knowledge. This is a nuanced paradigm that treats the organisation as a system. Kurt demonstrates with a fun example: our teams think of a seemingly random number, then go through a series of steps to end up with a colour, animal, and country. Most of us in the room end up with the same three answers, in spite of having picked different initial numbers! This is an example of a system that works to produce a single outcome. Deming’s model also considers the human side of change. Linking back to Maxine’s concept of the psychology of change, Kurt underlines the importance of understanding motivation and human variation when you undertake an improvement project.
— Haelo (@_Haelo) June 7, 2017
Theory of variation is up next: the idea that variation is due to either a common cause or a special cause. Common cause variation is inherent and natural in the system, while special cause variation arises from external events or specific circumstances.
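The distinction can be sketched in code. The example below is purely illustrative (the readings and baseline are invented, not from the course): the classic Shewhart approach flags points falling outside three standard deviations of a stable baseline as candidate special causes, while everything inside the limits is treated as common cause noise.

```python
# Illustrative sketch, not workshop material: flagging candidate special
# cause points with the basic Shewhart 3-sigma rule. All data invented.
from statistics import mean, stdev

def outside_limits(baseline, samples):
    """Return (index, value) pairs outside mean +/- 3 SD of the baseline."""
    m, s = mean(baseline), stdev(baseline)
    lo, hi = m - 3 * s, m + 3 * s
    return [(i, x) for i, x in enumerate(samples) if not lo <= x <= hi]

# A stable baseline period, hovering around 10 (common cause variation only):
baseline = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.1, 9.9, 10.0]

# New readings: 25.0 is far outside the limits, a candidate special cause.
print(outside_limits(baseline, [10.0, 10.4, 25.0, 9.8]))  # → [(2, 25.0)]
```

Points flagged this way warrant investigation of the external event behind them, rather than tampering with the underlying process.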
Finally, the model is completed by the theory of knowledge. This is where the Model for Improvement and the PDSA cycle are first introduced. Kurt then talks through the Model for Improvement, leading to the development of team aims and driver diagrams. To demonstrate the importance of working to the same aim, Kurt asks the teams to write down their goals individually and then compare the results. We find that within some teams, aims do not match at all! Good aims, Kurt says, must be specific, measurable, and timely.
— Rachel Volland (@RachVolland) June 7, 2017
After some final feedback, Maxine gets up to tell a story about how and why improvement projects fail, and what to do to increase our chances of success.
— Elderly Care MRI (@MRI_ElderlyCare) June 7, 2017
— Binita Kane (@BinitaKane) June 8, 2017
After a packed day of learning yesterday, our teams are back in the room for day two. We start with reflections from the teams on yesterday’s learning experience, and then Kurt Bramfitt is back to talk about measurement for improvement.
— Haelo (@_Haelo) June 8, 2017
Measurement shows us important information about our improvement project such as whether we have actually reached our goals and how we know that the change we have implemented is actually an improvement. Kurt also returns to Maxine’s key point from yesterday: convincing people that there is a problem is a major hurdle in any QI project. The data must be collected and presented in a way that confirms that there is a problem and that something can and should be done to change the situation.
Kurt also discusses the differences between traditional research and improvement science research. Instead of collecting “just in case” data, in improvement science, we collect “just enough” data to create small, sequential samples. Unlike in traditional research, our hypothesis is likely to change during the small rapid data cycles that we undertake. We also tend to display our data using run charts or SPC charts, which will be unpacked in detail later.
There are also three key types of measures that we use in improvement: outcome, process, and balancing measures. Linking back to yesterday, Kurt invokes system theory: we cannot change any one part of the system without having an impact elsewhere. Balancing measures essentially check that introducing our change is not affecting something else and leading to unintended consequences. The effect may be positive, but it can also be negative, so it is important to keep an eye on these measures too.
The teams are now given some time with the faculty to develop their measurement strategies. Faculty members Kurt, Judith Strobl, and Rachel Volland are on hand to visit teams and help them to refine the details regarding the data they want to collect and how they are going to collect it.
— Kurt Bramfitt (@KurtBramfitt) June 8, 2017
Nick John is a Lead Data Analyst at Haelo, and in the next session he talks about the visual display of data.
— Haelo (@_Haelo) June 8, 2017
Nick takes us through the “seven deadly sins of data visualisation,” which all focus on cluttered, imprecise, or misleading presentation. Our charts should be easy to read, tell an accurate story of events, and contain a level of detail appropriate to the subject matter. This reflects Kurt’s example yesterday of trying to use a globe for directions instead of a street map. Nick concentrates on the display of data over a reasonable timescale in run charts. Most of the participants are familiar with this type of chart, and the group easily identifies the median line.
The teams then put their knowledge to the test when they are given a blank axis and some points to plot, before calculating the median line and extending the chart into the future.
Now that we have created our visualisations, we can see some strange results, which Nick clarifies by annotating the chart with some important information regarding what has been going on in the situation where the data has been collected. This is one of the keys to effective data presentation: notes that explain any bizarre points on the chart. Nick hands out a run chart quiz for the teams, before explaining the finer details of run chart rules so that the teams can compare their charts against the rules.
Maxine then underscores the importance of community ownership of an improvement project. Aims should be set during a conversation between everybody who will be involved.
Maxine shows us her driver diagram and her measurement processes on the C.diff project, emphasising once again the concept of appreciation for the system, and being aware that a change in one place often has unintended consequences in another. She also returns to the psychology of change: understanding why a situation occurs is absolutely essential if we wish to change it.
Day 2 comes to a close with Kurt asking the teams to share their “aha” moment of the day. Teams praise the input from coaches on measurement strategies, and there’s an enthusiastic consensus that Nick’s straightforward approach to data visualisation was illuminating and set anxious minds at ease.
— Keeley (@keeleydavies5) June 8, 2017
Welcome to the final day of IS4Ac, workshop one. Today focuses on theory and ideas for change. After a welcome back from Kurt Bramfitt and Judith Strobl, we begin today with our two final team presentations.
The teams are now given 45 minutes to refine their driver diagrams and measurement strategy. We find that coming back to these key models periodically means that we can see how our ideas are evolving over the workshops.
Kurt also talks about Utopia Syndrome, where too much time and effort is spent trying to craft an ideal form of change. This often leads to paralysis: nothing actually gets done! Our method in improvement science is to implement multiple, rapid cycles of change. We may make mistakes, but in the process we quickly discover what doesn’t work and adapt our thinking. Kurt discusses the difference between reactive (first order) changes and fundamental (second order) changes: that is, changes we make in response to an immediate problem versus changes that redesign a system or process so that it works better or more efficiently.
Teams are given some masking tape and a blank axis to demonstrate the use of PDSA cycles. Each team must sweep a coin across the table from one masking-tape line to the other, aiming to land the coin as close to the second line as possible. We are set the challenge of achieving a 50% reduction in variation from the target, and we must document our PDSA cycles.
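For illustration only (the measurements below are invented, not a team's actual results), the 50% challenge could be checked numerically by comparing the spread of each cycle's distances from the target line:

```python
# Illustrative sketch with invented measurements: each list holds the coin's
# distance from the target line (in cm) for the attempts in one PDSA cycle.
from statistics import stdev

cycle1 = [8.0, 12.0, 5.0, 15.0, 10.0]  # baseline attempts
cycle3 = [3.0, 5.0, 2.0, 6.0, 4.0]     # after two rounds of Plan-Do-Study-Act

# Use the standard deviation of the distances as the measure of variation.
before, after = stdev(cycle1), stdev(cycle3)
reduction = 1 - after / before
print(f"variation reduced by {reduction:.0%}")
```

With these made-up numbers the reduction comes out above 50%, which is exactly the kind of before/after comparison the documented PDSA cycles make possible.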
— Haelo (@_Haelo) June 9, 2017
— Jessica Roberts (@spangledpirate) June 9, 2017
The exercise has made teams consider their own data capture and presentation plans.
After lunch, Judith summarises our learning, particularly the importance of knowing exactly what it is we are testing, and completing the “study” and “act” portions of the PDSA model. She gives the teams some time to formulate the first part of their PDSA cycles.
In the final session, Kurt takes us through the next steps for our first action period, including monthly reports, WebEx sessions, and site visits.
#IS4AC great QI course I did it 2013 enjoy !
— karen kemp (@karenkemp46) June 9, 2017
We will be back on 4th-6th October for workshop two.
Leis JA, Shojania KG. A primer on PDSA: executing plan–do–study–act cycles in practice, not just in name. BMJ Qual Saf Published Online First: 16 December 2016. doi: 10.1136/bmjqs-2016-006245.
Power M, Wigglesworth N, Donaldson E, Chadwick P, Gillibrand S, Goldmann D, et al. Reducing Clostridium difficile infection in acute care by using an improvement collaborative. BMJ 2010;341:c3359.