Thursday 16 July 2015

Reporting frequency isn’t the problem. Failing to derive insight from the data is.

NHS England plans to stop releasing A&E performance numbers weekly. This is a serious mistake. The real problem isn’t inconsistency in the way performance against different standards is reported but a persistent failure to derive insights from the data.

It looks like NHS England will stop publishing weekly performance data for A&E departments in England from next month. According to their website:

Following the recommendations from Sir Bruce Keogh’s review of waiting time standards, statistics on A&E attendances and emergency admissions from July 2015 onwards will be published monthly rather than weekly, with the last weekly publication being for week ending 28 June 2015.

The reasons, according to Bruce Keogh’s report are:

Current arrangements for reporting performance are extremely uncoordinated. Standards report with different frequencies (weekly, monthly and quarterly) and on different days of the week. This makes no sense - it creates distraction and confusion. We receive feedback that this makes it difficult for people to have one transparent, coherent picture of performance at any one time.

My recommendation is therefore that we standardise reporting arrangements so that performance statistics for A&E, RTT, cancer, diagnostics, ambulances, 111 and delayed transfers of care are all published on one day each month.

While I’m all for having a “transparent, coherent picture of performance”, what is being proposed addresses the wrong part of the problem. The right way to use data is to derive insight into what the underlying problem actually is. That should help focus improvement programmes on interventions that are likely to work and away from interventions that feel good but are irrelevant to the real problem. Making A&E reporting consistent with RTT reporting isn’t going to help anyone gain insight into their problems.

How performance data is misused

To be fair to Keogh there are many ways that performance targets can be misused or can have perverse consequences.

The old targets for hospital waiting times had some very bad side-effects. They penalised hospitals for treating long waiters, leaving many spending more management effort on minimising the number of long waiters treated than on speeding up the overall treatment process. Because every treated patient who had already breached 18 weeks counted against reported performance, it was better to keep them waiting longer and “drip feed” them into treatment at a rate that wouldn’t breach the target performance level. What a perverse waste of management time.

And there is a tendency to overreact to noise. When you have a weekly target there is often immediate pressure to “do something” when you breach the standard. But doing something without a good understanding of the causes of the problem is worse than doing nothing (a famous Deming experiment illustrating this is explained here). As the blogger squiretothegiants says in explaining Deming:

The point is to understand the system and the reasons for variation. Then (and only then) you can make meaningful changes instead of merely tampering.
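The noise-versus-signal point is easy to demonstrate. Here is a minimal sketch (with made-up numbers, not real NHS data) of why reacting to individual weekly breaches is usually tampering: a perfectly stable process breaches a 95% standard in many weeks through noise alone, while Shewhart-style control limits of the sort Deming advocated flag almost nothing:

```python
import random

random.seed(42)

# Simulate 52 weeks of a *stable* process: true performance around
# 95.5%, with random week-to-week noise of about one percentage point.
# (Illustrative numbers only, not taken from the published dataset.)
weeks = [95.5 + random.gauss(0, 1.0) for _ in range(52)]

target = 95.0
breaches = sum(1 for w in weeks if w < target)

# Shewhart-style control limits: mean +/- 3 standard deviations.
mean = sum(weeks) / len(weeks)
sd = (sum((w - mean) ** 2 for w in weeks) / len(weeks)) ** 0.5
lower, upper = mean - 3 * sd, mean + 3 * sd

# Weeks that breach the 95% target are common pure-noise events;
# weeks outside the control limits (genuine signals) are rare.
signals = sum(1 for w in weeks if w < lower or w > upper)
print(f"breaches of target: {breaches}, genuine signals: {signals}")
```

Only the points outside the control limits justify intervention; everything else is the system behaving normally.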

The performance management processes of the NHS positively encourage tampering, to the detriment of effective improvement. Nigel Edwards summarised this very effectively in a rant in the HSJ earlier this year.

However, just because performance measures are often misused doesn’t mean we should change the measures, especially if they can provide useful insights into the causes of poor performance.

What insights can be derived from weekly A&E data?

There are insights in the weekly data, and they tend to contradict most of the popular myths about why problems exist. It is widely assumed, for example, that A&Es are being swamped by a wave of demand; or that they are taking up slack from patients who should be treated by GPs but no longer are since GPs abandoned out-of-hours provision; or that too many of the wrong sort of patient with trivial injuries are turning up. And so on.

None of these popular ideas are compatible with the basic simple statistics of the weekly A&E performance reports.

For a start, there has been no sudden change in the number of people attending A&E. Here is a chart showing the weekly attendance and performance for each type of department.

[Chart: weekly A&E attendance and performance, by department type]

There has been no sudden change in attendance. In fact, though the weekly data hasn’t been publicly released, this has been true over the last two decades in major A&Es: steady growth of 1-2% per year overlaid on a noisy week-to-week pattern, with summers busier than winters. It is also worth noting that all the problems with performance are in major A&Es (type 1 departments). The opening of many new walk-in centres and minor injury units has generated new demand but has had no notable impact on attendance or performance at the major departments.

Here is a more detailed illustration of the recent trends from hospitals in the Manchester area.

[Chart: weekly A&E attendance for Manchester-area trusts versus the 2011 average]

The lines here show the weekly attendance versus the average week in 2011. The numbers at the end of the lines show the latest quarter versus the average in 2011. It is worth noting that two out of eight trusts have seen volume falls.

More significantly, if you compare the last chart to the weekly performance shown below, you will see that there is no notable relationship between the volume and performance trends (note that Salford Royal has the largest increase in volume but the best and most consistent performance). The numbers to the right are the latest quarter volumes versus 2011 as in the previous chart.

[Chart: weekly A&E performance for Manchester-area trusts, with latest-quarter volumes versus 2011]

The lack of any relationship between volume and performance is one of the clearest results emerging from the analysis of weekly data and one of the most ignored in policy. Probably the majority of money spent in the last few years to avert the regular winter crisis has gone on trying to divert patients from A&E. But the weekly statistics show that attendance isn’t the problem. There is, however, a significant relationship between performance and the number of admissions. The chart below shows both attendance and admissions against performance for all major A&Es in the weekly dataset. Each dot is a single week.

[Chart: attendance and admissions plotted against performance for all major A&Es; each dot is one week]

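The kind of check the weekly dataset makes possible can be sketched in a few lines. The figures below are hypothetical, invented for illustration rather than taken from the published data, but they show the shape of the analysis: performance tracks admissions, not attendance:

```python
# Hypothetical weekly figures for a single trust (made up for
# illustration, not taken from the published weekly dataset).
attendance  = [2700, 2500, 2600, 2750, 2550, 2450, 2500, 2650]
admissions  = [ 600,  820,  640,  760,  580,  850,  610,  700]
performance = [96.0, 90.0, 95.0, 92.0, 96.5, 89.5, 96.2, 93.5]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Attendance shows only a weak relationship with performance;
# admissions show a strong negative one.
r_attendance = pearson(attendance, performance)
r_admissions = pearson(admissions, performance)
print(round(r_attendance, 2), round(r_admissions, 2))
```

On real data the correlations are noisier, but the contrast between the two is the same signal the weekly statistics have been showing all along.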
This relationship strongly suggests that the core problem in A&E performance is concentrated in the group of patients who need a bed. Other evidence (eg analysis of HES data, which allows us to see how long different types of patients wait for treatment, discharge and admission) strongly supports the idea that the core problem is related to admission. This might suggest that there has been a significant change in patient morbidity, though analysis of A&E HES data (eg this from the Nuffield Trust) hasn’t confirmed that idea. Unfortunately admission thresholds vary between trusts and are not a good indicator of whether patients need to be admitted.

Whatever the reason for the increasing number of admissions, even the weekly data clearly points to problems admitting patients quickly as the core issue. Moreover, the problems are entirely inside the major A&E departments and not in other parts of the system like walk-in centres (indeed, a key message is that diverting patients away from major departments is irrelevant to system performance, negating the supposed benefit Keogh expects from a focus on whole-system metrics). Investing effort elsewhere is a waste of effort and money. Unfortunately, despite this clear message from the weekly statistics, it has not been the focus for action.

So What?

The key point I want to make is that the weekly data is a rich source for testing and monitoring ideas for improving A&E performance. But the key messages have mostly been ignored. Far too much of the performance improvement effort has focussed on the noise in the weekly data (as Nigel Edwards pointed out) and far too little on the long term patterns revealed by looking at all the data.

The idea that we should look only at monthly data to help people see “whole system” performance is a mistake, as it makes it harder to see the key patterns that clearly point to the problem lying in a particular part of the system and not in the system as a whole. The problem isn’t that inconsistent ways of reporting performance are confusing; it is that the clear messages in the data have been ignored, leading to improvement initiatives driven by anecdote rather than analysis. We have, as a result, invested a great deal of money in initiatives that were never going to work.
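The aggregation problem is easy to see in miniature. A sketch with illustrative numbers only:

```python
# Four illustrative weeks in which one very bad week breaches the
# 95% standard by a wide margin...
weekly = [96.5, 96.0, 88.0, 95.5]

# ...yet the single monthly figure a reader of the new reports would
# see is a mildly off-target average that hides both the timing and
# the severity of the problem.
monthly = sum(weekly) / len(weekly)
worst_week = min(weekly)
print(monthly, worst_week)  # the 88.0 week disappears into a 94.0 month
```

Every move from weekly to monthly reporting performs exactly this smoothing, fifty-two times a year.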

The few independent analysts (eg @GMDonald whose interactive analysis of the weekly data is on Tableau Public here) who have sought to look for the key patterns in the data will no longer find those patterns so easy to reveal. And even if the leadership in NHS England found a new capability to seek insight from data, they too would now find that insight harder to come by.

The problem has never been that the weekly data is misleading; it is that it clearly leads but few have ever bothered to follow that path.

Monday 13 July 2015

What the NHS can learn from the Battle of Britain

Britain’s great national institution, the NHS, can learn from Britain’s iconic national battle, but only if we understand what really happened and ignore naive popular myths.

The 75th anniversary of the Battle of Britain found me re-reading one of the most insightful histories of the battle. I came away with some lessons that apply to the current situation in the NHS. The core lessons relate to how the RAF won the battle and how different the reality is to the popular myths that form a dense fog of comforting but misleading confusion.

The NHS is also beset by myths about how it works and what is wrong with it. We imagine heroic doctors and nurses battling a stifling, politicised bureaucracy to offer the best care in the world to patients and therefore suppose that more front-line staff, less political interference and fewer managers would put the system to rights. Unfortunately, much of this is wrong and, more importantly, none of it will help us choose which actions will preserve and improve the NHS over the next decade.

Learning the right lessons from the Battle of Britain might.

The Battle of Britain: myth and reality

The popular myth surrounding the Battle of Britain has many elements. One is that Britain fought heroically against overwhelming odds and a methodical and well-prepared enemy. Another is that the RAF’s success depended on Britain’s possession of superior technologies (like the Spitfire or Radar) developed by a small number of scientific geniuses. Yet another is that success was due to the supreme heroism of a small band of fighter pilots who bravely defied the well-organised teutonic hordes.

There are snippets of truth in the myth (without them it would not gain any traction in our imaginations). Mostly, though, the myths mislead and teach us nothing useful. But a good understanding of what really happened might.

The source for the real lessons is Stephen Bungay’s masterful history of the battle: The Most Dangerous Enemy: A History of the Battle of Britain.

Bungay sums up one of the problems with belief in the traditional myths:
These elements of the myth have had a nefarious influence on the way the British have understood themselves. They suggest that we came through by the skin of our teeth, because of innate superiority and divine providence (usually called ‘good luck’). These things saw us through then, so it must be all right to rely on them in the future. By the same token, if everything starts to go wrong, there is not much we can do about it. Either we are in fact worthless after all, or the gods have deserted us. So we either run ourselves down or blame circumstances outside our control. Both of these reactions are counsels of despair and both are based on complete fantasies. The way in which the Battle was actually won offers us far more worthwhile lessons, both for the present and the future.
I’ll go through just a few of the key myths and then look at what actually happened to draw some lessons for our current situation in the NHS.

Many assume that the technological superiority of the Spitfire and Radar, the inventions of British technological geniuses, was key to winning the battle. But the Germans had Radar first and, while superior to the Bf 109, the Spitfire was not superior enough to be decisive by itself (besides, Hurricanes shot down more Luftwaffe aircraft).

The idea that Britain was unprepared for the battle and somehow muddled through via the efforts of a small band of RAF heroes is also false. Bungay reports:
...Fighter Command was the best prepared fighter force in the world, by a considerable margin.
Understanding why and how this was true is critical to learning anything useful from the real history of the battle. Again Bungay’s words provide a pithy summary:
In their ‘finest hour’ the British behaved quite differently from the way in which they usually seek to portray themselves. They exhibited a talent for planning and organisation which, in its Teutonic thoroughness, far outstripped that of the Germans. They left little to chance, planned for the worst case and did not rely on luck. Given all this, it is hardly surprising that they won. It is, on the other hand, quite extraordinary that they should imagine they could have won by doing the opposite.
The truth is that the “secret weapon” that defeated the Luftwaffe was superior organisation of the resources available. Bungay again:
Churchill’s mythologisation of ‘the Few’ has led to a general belief that they were a superior breed to their German counterparts. They were not. The key difference in military performance between the antagonists was not in the pilots. The distinction which really made the difference was in the leadership.
It is worth reading Bungay’s complete book for the details (it is, for a military history, surprisingly readable and remarkably insightful). But I’m going to concentrate on just one aspect, one which even now, 75 years later, has significant lessons for how to achieve success when faced with major challenges.

The key to the success of the RAF was an organisational system for managing the fighting. This isn’t the sort of topic that makes for grand motivational speeches, which is why Churchill didn’t dwell on it. The reason Radar was so useful wasn’t that it was a great British invention the enemy didn’t have:
The Germans knew all about radar and used it themselves to great effect. They had in fact invented it first. It was in the application of radar technology to create a command and control system that the British were pioneers.
It wasn’t having Radar that mattered; it was knowing how to use the information it provided to direct the fighting. The secret wasn’t the technology: it was knowing what to do with it. It was superiority of organisation, not technology.

At the centre of the process was an idea still relevant to modern organisations: a place where all the relevant information was collected and analysed. In the RAF’s control room there was a masterful data visualisation (and this in a world without computers):
The plotting system was a masterpiece of graphic design. Using the simplest of methods, it showed at a glance the deployment of forces in three-dimensional space and used colour to convey the dimension of time.
Radar and operational data were brought together in one place so the RAF could understand what was going on and deploy its forces to respond.
It brilliantly solved the problems of dealing with massive amounts of data from a wide range of sources in a very short time and using it to exercise control over the fighting. It was a system for managing chaos.
It was this coupled with other innovations in how to organise fighters that won the battle.

So what? How is this 75 year old story relevant to how we run the NHS?

It is widely assumed in popular debate about the NHS that all that matters is how many front-line staff the system has and how motivated and heroic they are in pursuing their work. Management is just something that exists to please the bean-counters and otherwise just gets in the way of the real purpose. Politicians pander to this public belief by offering to sack more managers and put more staff on the “front-line”.

We have also started assuming that the challenge of future productivity can be met through the deployment of clever new technologies like personalised apps or genomics. Somehow they will enable the system to bridge the £20 billion productivity gap that threatens to overwhelm the NHS with demand it can’t possibly meet.

Both these mythological beliefs have parallels with the Battle of Britain.

Had Churchill’s actions been determined by his myth-making rhetoric about brave fighter pilots, he might have starved the leadership of the RAF of the people necessary to manage the battle in favour of more “front-line” pilots. Luckily for the fate of the free world, he knew the difference between the rhetoric and the reality. The RAF got the balance between fighting resources and management resources right. Few modern politicians running the NHS have shown any such insight.

The NHS is starved of up-to-date, reliable information that could be used to manage the system more effectively. It spends less on the sort of IT that would make it easy to record and analyse information effectively than most organisations of any sort. It has the lowest ratio of actual managers to staff of any large organisation on the planet. And many of the managers are not actually managing but are overwhelmed by the corrupting reporting of irrelevant information by a political leadership that doesn’t understand how to make improvements happen (this brilliant rant from Nigel Edwards sums this up in a typically robust way).

And the strategy for IT in the NHS is disturbingly focussed on fancy new technologies creating miraculous improvements rather than on fixing the mundane and unsexy practical problems of collecting and using information that cripple the current system. This, too, has a parallel with the German strategy during the war. German researchers produced a number of astounding advances for the air war, some many decades ahead of their time: the first jet-powered fighter, the Me 262; the first unmanned drone bomb, the V1; the first ballistic missile, the V2. They produced a great deal of false hope for the Nazi hierarchy but made bugger all difference to the outcome of the war.

The lesson the NHS needs to learn is that myths are not a good basis for deciding what to do.

The reality of the Battle of Britain is that it was won by superior management of resources not by clever technology or a small band of heroic, motivated individuals. It was won by better deployment of the resources available. This was, in turn, driven by a system designed to deliver better information to the leadership so that they could make the deployment decisions and learn which tactics were effective faster than the enemy.

Maybe this sounds dull and prosaic. It doesn’t scale the rhetorical heights of Churchillian prose that motivated the nation during the early days of the war. But doing the boring, behind-the-scenes organisational stuff is what won the battle. The heroic front-line fighters would not have won if they were badly organised and badly deployed.

If the NHS is not going to be overwhelmed by growing demand it needs to learn this lesson.