
Wednesday 14 December 2016

The way the NHS measures average bed occupancy doesn't support effective solutions to the shortage of beds



News headlines today (the BBC's on increasing occupancy of beds and The Times on nighttime discharges) reflect a real problem with bed occupancy in NHS hospitals. But the metric on average bed occupancy doesn't measure what it claims to measure and actively distracts from practical solutions that would improve the system.

The NHS has been collecting data about bed occupancy and availability for a very long time. But just because the statistic has been around for a long time doesn't mean it is useful. Sure, it measures something, but whether that something helps the NHS do a better job is highly questionable.

I first came across the metric in the early 2000s when I was working on problems in A&E departments and realised that finding a free bed was one of the biggest barriers to quick treatment. It still is. What I discovered was that the way the metric is measured is about as useless as it is possible for a metric to be. It not only doesn't help solve real problems, it actively drives people to suggest the wrong solutions.

The trouble is that what we need to understand is why beds are hard to find at the particular times of day, and days of the week, when they are needed. Peak arrivals at A&E, for example, usually occur between 9am and 10am. So the demand for beds for the 1 in 4 A&E patients who need to be admitted peaks sometime before lunchtime. That's when we need the beds, at least for the uncontrolled flow of emergencies. (In principle, hospitals can control the timing of the flow into elective beds, though many don't.)

But the bed occupancy metric doesn't tell us about the availability of beds at the point when they are needed. Nor does it tell us about the average occupancy across the day or the week. It tells us the number of beds occupied at midnight on a particular day of the week. When I first started working for the NHS I expressed astonishment that the statistic was so irrelevant to the real problem of finding beds when you need them. I was told that such a long-standing practice could not be changed.

The reason why the metric is so useless in practice isn't hard to understand. In a typical district general hospital (DGH) with 500 beds, each day will see somewhere between 75 and 100 discharges. In many hospitals those discharges typically happen in the afternoon, often late in the afternoon. This doesn't match the demand for beds, which is dominated by emergency admissions peaking in the morning. It isn't helpful to know how many beds are free at midnight: we need to know how many are free every hour of every day.

If we focus purely on the published metric, the only way to fix a lack of availability is to add more beds and hope discharges don't become any less disciplined (unfortunately there is plenty of evidence that things will get more relaxed and the beds will fill up with patients who should have been discharged more quickly). If we focus on the pattern of arrival and departure across each day we can see better ways to create space for emergencies.

I supported the Department of Health to develop a Bed Management Toolkit in 2007 that recommended doing as many discharges in the morning as possible (this is still part of good practice recommendations now). If a good proportion of the 75-100 patients are discharged in the morning, there will be plenty of free beds for the emergency admissions. If they stay in their beds all day awaiting slow processes to get them out (like prescriptions for take-home medication or discharge notes) then the hospital may well find itself running out of free beds early in the afternoon even though it will have free beds later in the day. Patients in A&E will spend a long time in an environment that isn't the best place for their care. There is plenty of evidence that small changes in discharge practices can make big differences to bed availability at the times of day when beds are needed.
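
To make the arithmetic concrete, here is a minimal sketch in Python, using made-up but plausible numbers rather than real hospital data: two hospitals with identical admissions, identical daily discharges and identical midnight occupancy look completely different at 2pm depending on when the discharges happen.

```python
# Illustrative sketch only: invented numbers, not real hospital data.

HOURS = range(24)

# Assumed emergency admissions profile for a 500-bed DGH (80 admissions/day,
# peaking late morning), roughly matching the pattern described above.
admissions = {h: 0 for h in HOURS}
admissions.update({9: 6, 10: 8, 11: 10, 12: 10, 13: 8, 14: 8, 15: 7,
                   16: 6, 17: 6, 18: 5, 19: 3, 20: 2, 21: 1})

def free_beds_by_hour(discharges, total_beds=500, occupied_at_midnight=470):
    """Track free beds hour by hour from a midnight starting position.

    A negative result means no bed is free and patients are queuing in A&E
    waiting for one.
    """
    occupied = occupied_at_midnight
    free = {}
    for h in HOURS:
        occupied += admissions[h] - discharges.get(h, 0)
        free[h] = total_beds - occupied
    return free

# Hospital A: 80 discharges a day, mostly late in the afternoon.
late_discharges = {15: 20, 16: 25, 17: 25, 18: 10}
# Hospital B: the same 80 discharges, mostly before lunchtime.
early_discharges = {9: 20, 10: 25, 11: 25, 12: 10}

for name, pattern in [("late discharges", late_discharges),
                      ("early discharges", early_discharges)]:
    free = free_beds_by_hour(pattern)
    print(f"{name}: free beds at 2pm = {free[14]}, at midnight = {free[23]}")
```

Both hospitals report exactly the same midnight figure, but the late-discharging one has run out of beds by early afternoon, precisely when the emergency demand arrives.
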

In the hospitals that do collect real-time bed utilisation, this pattern can be seen and managed. Surprisingly, many can't even collect this data; of those that can, many do nothing to ensure it is collected reliably, and others collect it but do nothing with it.

My main point is that the national bed occupancy metric tells us nothing useful about the problems many hospitals have finding beds at the times of day when they are needed. Worse, it tends to lead commentators to demand a major increase in bed numbers, which is unrealistic and could only happen slowly, rather than a focus on the effective management of discharges, which could yield benefits tomorrow at minimal cost. There are hospitals that genuinely need more beds in the medium term, but a failure to manage discharges effectively makes their problem much worse right now.

The NHS could argue that the problem is outside its control: many patients can't be discharged because of a lack of social care capacity. It is true that this is a big and growing problem. But it isn't the biggest problem. Audits of clinical notes of patients currently in beds usually show that between a quarter and a half are fit to leave hospital. And while perhaps a third of those are stuck because of external problems, the rest are stuck because the hospital hasn't got its discharge act together. But, what the hell, it is far easier simply to blame others than to do the hard work required to redesign processes inside the hospital.

But back to my main point. Average occupancy isn't much help, and the nationally reported metric doesn't even measure average occupancy. Hospitals need to understand real-time occupancy every hour of every day if they are to have any hope of managing the availability of beds at the times of day when beds are needed. Good systems for managing beds can lead to major improvements in availability at the points of the day when it matters and, as a direct result, will dramatically reduce long waits in A&E. This involves understanding the pattern of demand across the day and taking a disciplined approach to discharges that coordinates departures much more closely with that pattern.

If the NHS continues to focus on a bad way to measure the wrong thing about beds it won't get the insight it needs to drive real improvement.


Monday 24 October 2016

NHS problem solving is broken

The NHS has plenty of problems to solve, but it often short-circuits effective solutions by not bothering to check whether the problem has been correctly understood or whether the solution is likely to work. The system's pathological hostility to data makes this worse. As a result the NHS devotes a great deal of effort to futile actions which demoralise staff because they don't work.

I first started working for the NHS in 2002 when the 4hr A&E target was originally set. I helped compile the national performance reports to monitor progress and I designed some of the first analyses that used patient-level data to support improvement in individual departments. I learned a lot about the problem and what solutions were effective.

In those bad old days treatment was often unbearably slow and sometimes unsafe. There were plenty of ideas about why A&E was so often slow and even more ideas about how to speed up the typical A&E journey. Most of them were wrong. I could tell that because I looked at the data.

Some people thought that the target was a bad idea. It grated against the belief of many doctors that the sickest should get the fastest treatment. Many thought it was impossible. But it proved to be very possible and it also proved to be good for patients. Recent evidence has confirmed that long waits kill: speedy A&Es are not just a patient-pleasing idea, they actually save lives.

Nearly a decade and a half later discussions about the latest A&E performance crisis (we are not yet quite as bad as we were before the target was set, but progress towards that ignoble goal is rapid) are still mired in many of the ideas that were shown to be wrong when I collected the first data. Political debate, medical discussions and improvement plans are still full of zombie ideas that some of us already know won't work.

It isn't just A&E. Zombie ideas are more or less standard operating procedure in many strategic plans across the whole NHS.

This is maddeningly frustrating for those of us who are used to applying evidence to improving the system. Why is so much effort going into ideas that won't help?

I've got a theory: the NHS doesn't know how to solve problems. Instead of doing the hard work of diagnosis and testing, it leaps to implement solutions purely on the basis of their superficial attractiveness. It seeks magic bullets. It focusses on symptoms not causes. It fails, repeatedly.

The diagram below sums up part of the problem. It is based on some thinking from one of the few good books on business strategy (which is, after all, a problem solving process): The Mind of the Strategist.

[Diagram: summary of the problem-solving framework from The Mind of the Strategist]

Its author starts from the position of trying to understand why so many business strategies fail and the diagram summarises some of his thinking. The ideas are generic to any type of problem solving, which is why they apply so well to the NHS.

Here is my take. The NHS is like a doctor who doesn't want to take the time to really understand what is wrong with her patient and offers palliatives to the superficial symptoms without seeking the underlying causes. Sometimes a headache is just a headache and an aspirin will cure it; other times it is meningitis or a brain tumour and a failure to diagnose correctly kills the patient. Few problems in the current NHS can be cured by the organisational equivalent of an aspirin.

My theory of what is broken explains many strategic and operational failures in the NHS and is the best explanation for why the system as a whole seems to find improvement so hard to sustain.

Dysfunctional thinking leads to poor solutions to problems

My specialist topic is A&E performance and improvement. So I see a lot of misconceptions that frustrate me. Here are some examples.

So people observe that performance is poor. They notice that their A&Es are busy. They leap to the conclusion that they are being overwhelmed by demand. So they propose solutions that are based on diverting patients away from major A&Es.

Or, they note that staff appear overworked and the department is crowded. So they leap to the conclusion that the problem is A&E understaffing (and, perhaps depending on how political they are, they blame this on government underfunding and a national shortage of A&E staff).

But these supposed diagnoses ignore other important observations. There is no relationship at all between A&E attendance and performance either nationally or in individual departments. Mostly, for example, attendance is higher in the summer but performance is worse in the winter. Some of the worst performing departments right now are places where attendance has been steady for several years.

It isn't a staffing problem either. We might have fewer staff than many departments (or the RCEM) would like but we have a lot more than we had 5 years ago (when performance was much better). In fact there are about 50% more A&E consultants than there were 5 years ago and total staffing has grown faster than attendance for some years while performance has continued to decline.

I won't quote the details of this analysis here, but it has been studied a lot. Those studies have had very little influence on the system's choices about strategy or the general discussion on the topic.

The single biggest problem causing poor A&E performance is not even inside the A&E: it is the problem of accessing beds in the hospital. This has been obvious since I first started collecting data about A&E problems. The disease is poor flow through beds. If the treatment is adding more staff in A&E you are merely dealing with the symptom and failing to address the cause of the underlying problem. It's a bit like treating a blocked toilet with some anti-constipation medicine: it's irrelevant and probably makes the problem worse.

Somehow the system keeps coming up with solutions by leaping to conclusions about the symptom and forgetting to do the hard work of identifying the underlying disease. This is exacerbated by a habit of looking only locally and not at the whole system (in a hospital each department looks for its own internal problems but forgets it is part of a system where local initiatives may affect other units and other units may be the cause of its problems).

Big system-wide initiatives seem to suffer from the same problem. When hospital costs are too high the idea of mergers often arises. This seems to be driven by the belief that economies of scale are a major driver so bigger should be cheaper.

There are two problems with this. One is that it doesn't seem to work (few mergers seem to have delivered notable cost savings and many have made things worse). The other is that economies of scale may be all that some economists can imagine, but they seem to account for only about 10% of the differences in costs among hospitals. That second observation requires a great deal of analysis and number crunching, which isn't much of a part of NHS strategising.

One thread running through many improvement initiatives is the need to be seen to do something. The NHS would do better to hold off doing anything until the correct problem has been identified, or there will be no energy or resources left to do the right thing.

So what?

The NHS needs to stop leaping to the first solution it thinks of. If the system is to stop wasting the investment it makes in improvement it needs to stop and think.

Look at the whole system not just the local system (so don't assume that problems in A&E, for example, are caused purely in A&E) and group the different symptoms together. And look at the data. We have known since we started collecting data about A&E performance that the overall volume of attendance is not the problem and has no relationship with speed of treatment, but both locally and nationally solutions are still being proposed that involve diverting patients away from A&E.

When you have a picture of how the whole system is performing, develop and test some ideas about what the underlying problem is. For example, many different hospital problems seem, from my analysis, to be caused by poor flow through beds across the system. This idea explains both poor A&E performance and the poor ability to reduce elective waiting lists.

What solutions exist? How about actively managing flow across the system? Most hospitals don't even collect the data necessary to understand how flow works or what actions can make it better. Instead of seeking that data and the understanding and insight that would come with it, most seem to rely on local tinkering (more staff in A&E, extra clinics at the weekend to clear the waiting list backlog). These won't address the underlying blockages in flow through the beds. Worse, some don't try to fix anything, preferring to blame the problems on uncontrollable outside factors (which is why there is so much discussion about delayed transfers of care: a big problem caused by social care, but not the biggest, since more discharge delays are within the hospital's control).

Fixing flow isn't easy, but it is a lot harder if you don't even admit that it is the key problem. Practical solutions (like having the data to know where all patients are all the time) are hard to implement. But doing easy things that don't work is worse than having to experiment with hard things that do, because repeated failed initiatives sap the energy of staff and waste resources, making it even harder to do something different in the future.


The same general observations apply to other problems. The NHS as a whole has to learn that doing the wrong thing because you haven't spent the effort to identify the right problem and the right solution is worse than doing nothing at all.


Thursday 11 August 2016

A&E staffing and performance: don't mistake lobbying for accurate analysis of the problem


A shortage of A&E staff isn't the reason for poor performance in A&E. It never has been. Some A&Es have shortages of staff and that is a problem, but it clearly isn't the result of a national shortage of qualified staff. And the more the RCEM lobby for more staff, the more they detract from accurate analysis of the real problems of A&E. Fix those and we might even fix the local staff shortages.

If you read the newspaper headlines or the press statements issued by the Royal College of Emergency Medicine (RCEM), you might believe that we are desperately short of qualified staff in A&E and that this is the reason performance is currently at dismal record-breaking lows. You would be wrong.

You might also believe that the reason some A&E departments struggle to maintain safe levels of medical staffing is that the number of A&E doctors is declining or that we are desperately short of them nationally. But this certainly isn't the cause of local shortages, and whether we fall short of the ideal number of A&E medics is certainly not the reason why performance is poor.

You might, in responding to one of my many complaints that misleading things are being said about A&E staffing (which is growing faster than in any other specialty), argue that attendance has outpaced that growth. You would still be wrong.

Don't get me wrong, an understaffed A&E department is not good and is not safe. And there are several departments in England where staffing is far too low (they are the ones generating all the headline ink and giving the RCEM an excuse to lobby for more doctors). But blaming those local shortages on a national shortage is misleading and distracts attention from the real problems. And blaming local shortages on "NHS cuts" or squeezed budgets is ludicrous.

For a start, the local departments that are severely understaffed have the budget for more A&E staff but can't recruit them. That's a recruitment problem, not a budget problem. You could plausibly argue that those local problems were the result of a national shortage. That's a reasonable argument, but it falls down because the majority of A&E departments don't have problems recruiting. People being unwilling to work for your department might be caused by purely local problems, like the fact that the department is badly managed and a really bad place to work. Arguing about the national number of doctors isn't going to fix that. And lobbying for more staff actively detracts from identifying the local issues that need to be fixed to actually address the problem.

And the national numbers don't show the things headline writers assume they show. Here is a chart (based on ESR data) showing the national number of doctors with an A&E specialty:

[Chart: national number of doctors with an A&E specialty, based on ESR data]

Total medical staff in A&E have increased by more than 20% over this period; consultant levels have risen by more than 50%. Over this time attendance at major A&Es has risen by about 12% (so staff are not being "overwhelmed by demand" as the common belief has it).

So we have more A&E doctors, but do we have enough? The RCEM have a model that says we still don't. And their model might be right, but we are clearly closer to their recommended level of staffing than we have ever been, so hinting that current problems in A&E are caused by an increasing shortage is just bollocks. We can have a rational debate about the right number of A&E doctors nationally, but that debate has nothing to do with the local problems in some A&Es or the current performance of the system.

So why do perceptions differ so much?

One of the biggest reasons is a failure by many to grasp the distinction between demand and queues. In A&E the flow of patients into the department isn't the primary cause of the department being busy: that is driven by the number of people waiting. And the queue of people waiting is a lot more sensitive to the speed of the flow than it is to the number turning up. When the flow slows down, the queue expands quickly and that is what the staff perceive as the workload. But they mistake this for an issue with demand or attendance (we know this isn't true because we count the attendance numbers and they have not suddenly increased).
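
A back-of-envelope calculation (my own illustrative numbers, not data from any real department) makes the point. In a steady state, Little's law says the number of patients in the department is the arrival rate multiplied by the average time each patient spends there, so a doubling of time in the department does far more to the crowding than a modest rise in attendance:

```python
# Little's law: patients in department = arrival rate x average time in department.
# Numbers below are purely illustrative.

def patients_in_department(arrivals_per_hour, avg_hours_in_department):
    return arrivals_per_hour * avg_hours_in_department

print(patients_in_department(12, 3))    # baseline: 12/hour, 3h average stay -> 36 patients
print(patients_in_department(13.2, 3))  # 10% more attendances -> 39.6 patients
print(patients_in_department(12, 6))    # stays double because beds are blocked -> 72 patients
```

The department feels twice as busy even though not a single extra patient has turned up.
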
There is a good reason why flow in A&E has got slower, and it also explains why A&E staffing is irrelevant to the speed. Flow is slow because we can't find empty beds for patients who need to be admitted (go read some of my other analysis of this, especially the piece where I point out that the key problem is beds). Even doubling the staff numbers in A&E would not make any more free beds appear in the hospital.

More significantly, the performance problems caused by poor flow through beds may go a long way to explaining why some departments struggle to find enough staff. The direction of causality is not poor staffing -> poor performance; it is poor performance -> poor staffing. I've observed departments which are very crowded because of problems getting patients into beds. The A&E medics still try improvement initiatives but they don't make much difference. They can't, as they don't address the bottleneck in the flow, which is outside the A&E. This increases frustration and adds to the depressing environment of being in a crowded department where nothing you do makes things better. Eventually people don't want to work there any more. Hence: recruitment crisis and staff shortages. But the correct answer isn't more A&E staff: it is to address the bottlenecks in flow, which are mostly outside the A&E. Improving flow makes the A&E less crowded and the work there less stressful.

Lobbying for more A&E doctors to fix this sort of problem is irrelevant, ineffective and a serious distraction from dealing with the real problem. That's why I object so much when the RCEM use local problems to lobby for more A&E doctors nationally.

And talking about a mythical national shortage detracts from other analysis the RCEM does, most of which is good and almost all of which is more important than the national number of A&E doctors.

Monday 6 June 2016

Ad hoc analysis of big data sources is easy if you use the right tools: an example using English prescribing data

A recent newspaper story highlighted large price increases for some generic drugs prescribed by the NHS. I was able to replicate the information in the story and explore other issues with drug pricing in a single afternoon. Here I describe how I did it. There are valuable lessons here for anyone who needs to get rapid analysis from large datasets.

Two stories in The Times (the original story, the follow up) claimed that the NHS was losing £260m a year due to extortionate pricing of some generic drugs, some of which had seen price increases of > 1,000% as their licence holders exploited a loophole in the pricing rules. I wanted to check their facts and investigate price changes in all the drugs prescribed by the NHS.

This blog is (mostly) about how I did that, and I'm telling the story because the lessons about how to get rapid answers from large datasets are very useful in healthcare. But the NHS tends to use its vast repositories of data badly and is slow to adopt technologies that make the task of searching for insights faster and easier. What I did to validate the stories from The Times shows that, if you use the right technology for storing and querying the data, you can get almost instant insight whenever a new question arises.

The data source

The reason the question can be answered is that the HSCIC (now NHS Digital) has been releasing data about all prescriptions issued in England every month since August 2010 (you can find the latest month here). This data describes every prescription dispensed in primary care (not just the drug but the specific formulation), which GP practice or clinic prescribed it and how much the NHS paid for it.

It is a large data source. Each month the raw data has about 10m rows of data in a single CSV file of about 1.5 GB. The full dataset is about 700m rows of data and takes nearly 100 GB of space to store. This summarises about 5 billion individual prescriptions. Even a single month is too large for convenient analysis by spreadsheet and the full dataset is too large to fit easily on a laptop hard drive unless you have a very powerful laptop and don't want to do much else with it. In my case it was prescribing data or several editions of Football Manager: no contest.

But the raw data isn't enough for useful analysis. Individual items (which can be drugs or a range of devices or equipment) are coded using only a 15-character BNF code which describes the item uniquely but doesn't give much information about what it is (for example, what the key active ingredient is). Prescribers are identified by a unique code that doesn't tell you who they are or where they are. Some of this information is released alongside the data: a mapping of prescribers to their addresses is provided, as is a mapping of BNF codes to key ingredients.

But for convenient analysis we need to group the items together hierarchically. The BNF (the British National Formulary) does this by assigning each item to a chapter, a section and a paragraph/subparagraph, which groups things together into meaningful categories. For example, the Cardiovascular System constitutes a chapter, diuretic drugs a section, and different types of diuretics are grouped together in paragraphs (with similar groupings for other cardiovascular drugs).

Unfortunately, the HSCIC doesn't provide an up-to-date list of the BNF categories. The Business Services Agency, which collects this data as a side effect of the process for paying pharmacists to dispense the drugs, does, but it is in an obscure place (see instructions here) and it isn't kept rigorously up to date (so every month you need to do some manual editing when new drugs are launched and, when the BNF reorganises the hierarchy, even more work is required to tweak the mappings between old codes and the new structure). Luckily, I've been keeping my copy of the metadata up to date.
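
For anyone wanting to try this themselves, here is a rough sketch of the kind of join involved, using Python/pandas on a single month of data. The file and column names are my assumptions about the published format and the hand-maintained hierarchy file, not a definitive recipe; check them against the actual downloads.

```python
# A minimal sketch, assuming hypothetical file names and column labels similar
# to the published CSVs (the real headers may differ slightly).
import pandas as pd

# One month of raw prescribing data (~10m rows).
rx = pd.read_csv("T201601PDPI_BNFT.csv")          # hypothetical filename
rx.columns = [c.strip() for c in rx.columns]       # headers often carry stray spaces

# Hand-maintained BNF hierarchy: one row per 15-character BNF code.
bnf = pd.read_csv("bnf_hierarchy.csv")             # hypothetical metadata file
# e.g. columns: BNF_CODE, CHAPTER, SECTION, PARAGRAPH, CHEMICAL

merged = rx.merge(bnf, left_on="BNF CODE", right_on="BNF_CODE", how="left")

# Spend and items per BNF chapter for the month.
summary = (merged.groupby("CHAPTER")[["ITEMS", "ACT COST"]]
                 .sum()
                 .sort_values("ACT COST", ascending=False))
print(summary.head(10))
```
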

Storing and managing large data sources

I've been keeping an up-to-date copy of the prescribing data since I was involved in developing a tool to help medicines managers exploit it several years ago. While building that tool I explored several ways to store and manage the data and several tools to make it easy to analyse. The combination I ended up with is Google's BigQuery and Tableau. BigQuery is a cloud data warehouse optimised for fast queries on large datasets. Tableau is a desktop visual analytics tool that works well alongside BigQuery (or most other databases).

What is particularly fantastic about BigQuery is that it delivers superb analytic query speed for no upfront investment. To achieve similar speeds on in-house systems you would have to spend tens of thousands on hardware and software: BigQuery gives you almost interactive analytics performance as soon as you have loaded the data. And you only pay for storage and the volume of data processed by queries, neither of which is expensive. No database tweaking or maintenance is required. And, if you drive your analytics from Tableau, you don't even have to write any SQL to get results: it is all driven by visual actions inside Tableau.
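
As an illustration of how little code is involved once the data is loaded, here is a minimal query sketch using the google-cloud-bigquery Python client. The project, dataset, table and column names are my own placeholders, not my actual setup.

```python
# A minimal sketch of the BigQuery side, assuming the monthly files have been
# loaded into a table called `prescribing.raw` (placeholder naming).
from google.cloud import bigquery

client = bigquery.Client()  # uses your default GCP project credentials

sql = """
SELECT
  period,                         -- YYYYMM string in the raw data
  SUM(act_cost) AS total_cost,
  SUM(items)    AS total_items
FROM `my-project.prescribing.raw`
GROUP BY period
ORDER BY period
"""

monthly = client.query(sql).to_dataframe()
print(monthly.tail())
```
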

In fact the hardest and most time-consuming part of managing the prescribing data is maintaining the metadata, which requires sourcing and manipulating data from a variety of other sources.

Some basics about the dataset using Tableau

My core data and metadata are already stored in BigQuery, so all I need to do to analyse it is connect Tableau to the data sources and define the relationships between the core data table and the metadata tables. Tableau has a neat visual window for defining these relationships. The only other step required for interactive analysis is to define some additional calculations for convenience. This is as easy as writing formulae in a spreadsheet. In this case I had to convert the dates in the data source (YYYYMM format) into dates Tableau can understand via simple string manipulation, and I had to define some new formulae to calculate things like the average cost per prescription.

The hardest part of doing the analysis relevant to drug prices was remembering how to calculate changes in price from a fixed date in Tableau (it is easy: see here for a guide to doing it).

Once Tableau is plugged into BigQuery and those additional calculations are set up, everything else is a matter of minutes. Query results from BigQuery take at most ~30s on a bad day, and Tableau turns them into tables or charts straight away; these can all be modified or adjusted visually without the user needing to drop into SQL or some other query language. The interactive, visual approach used by Tableau allows the user to focus on getting the right question, the right analysis and the right presentation of the data.

The analyses

The table below, for example, took a couple of minutes to generate even though it summarises all of the drugs issued since 2010 in several different ways (part of the process was to summarise everything and then eliminate the devices and dressings categories).

[Table: basic statistics and costs for drugs prescribed since 2010]

It is easy to create visual analysis as well as tables. This summary of monthly spend and prices took a few more minutes to generate:

[Chart: monthly totals of spend and prices by BNF chapter]

This is very high-level summary analysis, but Tableau makes it easy to drill down to the lowest levels of detail available. To address the question posed by the stories in The Times I needed to look at the changes in prices of individual drugs (or chemicals) over time.

It is easy to set up the analysis. I just had to create a table showing the volume and average price of all chemicals by date (this is a big table as there are nearly 2,000 unique chemicals, though not all the ones in 2016 were also used in 2011). I sorted the table and selected all the examples where the price was more than 400% higher in January 2016 than it was in January 2011 (my threshold is slightly different from The Times's). Then I grouped the low-volume examples together and got this table:

[Table: all drugs with >400% price increases, January 2011 to January 2016]

This gives a good general overview of the places to look for big price hikes.
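
I did this interactively in Tableau, but for readers who prefer SQL, a hedged sketch of an equivalent query is below. The table and column names are assumptions rather than my actual schema, and "more than 400% higher" means the 2016 price per item is over five times the 2011 price.

```python
# Table and column names here are placeholders (not the real schema).
from google.cloud import bigquery

sql = """
WITH by_chemical AS (
  SELECT
    chemical,
    period,
    SUM(act_cost) / SUM(items) AS cost_per_item,
    SUM(items) AS items
  FROM `my-project.prescribing.raw_with_bnf`
  WHERE period IN ('201101', '201601')
  GROUP BY chemical, period
)
SELECT
  a.chemical,
  a.cost_per_item AS price_2011,
  b.cost_per_item AS price_2016,
  b.cost_per_item / a.cost_per_item AS price_ratio,
  b.items AS items_2016
FROM by_chemical AS a
JOIN by_chemical AS b ON a.chemical = b.chemical
WHERE a.period = '201101'
  AND b.period = '201601'
  AND b.cost_per_item > 5 * a.cost_per_item   -- more than 400% higher
ORDER BY price_ratio DESC
"""

hikes = bigquery.Client().query(sql).to_dataframe()
print(hikes.head(20))
```
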

We can also do specific analysis of individual chemicals. The Times mentioned four in particular. Their volume and price history is shown in the chart below.

[Chart: volume and price history of the drugs mentioned by The Times, on log scales]

Again, this took only a few minutes to generate. Note the log scales used to fit the enormous price increases into a scale you can read.

Of course, we can also do ad-hoc analysis of things that The Times didn't ask, like which drugs have seen the biggest price decreases because of the benefits of generic competition. That table is below:

[Table: drugs whose prices have fallen to a quarter or less of their 2011 level]

The NHS is saving >£70m every month on this list alone (and if we took rising volume into account the savings would be even bigger).

If you pick the right tools for analysing big data, you can spend time focussing on the questions

It took me an afternoon to replicate The Times analysis and to go much further. Admittedly, I already had the data and a platform for analysing it. But this is much faster than can be achieved with any tool provided by the NHS. That is strange, because a number of key NHS improvements depend on good analysis of this dataset. The Times showed its value in controlling spending by highlighting excessive pricing based on suppliers' monopolistic positions. But the same data can highlight who prescribes too many antibiotics, who doesn't give their patients the right mix of diabetes medication, or where modern alternatives to warfarin are being used.

Making the data easy to work with is an essential part of making it useful. Platforms that allow rapid answers to be derived from the complete dataset are far better than tools that allow only local or partial analysis. Many smart people are currently wasting weeks of their valuable time wrangling parts of this dataset to answer smaller, less useful questions. They should be applying their brains to bigger questions and the NHS should be giving them the tools to make them more productive.

This isn't happening. Ben Goldacre, for example, struggled to get NHS funding for an online tool that allows this sort of analysis to be done by anyone (this now exists in beta form but has mostly been funded by charities, not the NHS).

And the NHS has many other large datasets it needs to make use of. Patient level data for outpatient, A&E and inpatient activity all exist but are significantly underused, partially because finding interesting patterns is hard and slow.

But the world of data analytics has changed in the last five years. The tools exist to take the time and the drudgery out of big data analyses. The NHS should be using them.

Thursday 21 April 2016

Gresham's Law works in health policy: bullshit squeezes out honest truthful analysis

Gresham's law is an economic idea that states that bad money drives good money out of circulation. The same thing seems to happen in healthcare policy: good analysis of problems and good policy is squeezed out of the debate by bad analysis and bad policy that sounds good but won't work. This tendency has to be fought vigorously or it will become impossible to improve the NHS except by accident.
"Bullshit is a greater enemy of truth than lies are…" Tim Harford
Gresham's law is a very old principle in economics (it has been known since Aristophanes) which states that bad currency drives good currency out of circulation. I won't say more about the economic mechanisms here because this is an article about health policy, where I've noticed the same sort of problem appears to be happening.


In short, dumb analysis of what the problems in the NHS are is starting to dominate intelligent, reliable analysis in debate and in policy making; dumb policies are driving out the good ideas that might make things better.
There are a number of reasons for this problem. One was dissected masterfully by Tim Harford in a recent FT article:
This is the real tragedy. It’s not that politicians spin things their way — of course they do. That is politics. It’s that politicians have grown so used to misusing numbers as weapons that they have forgotten that used properly, they are tools.
But we should not rush to blame politicians for dumb analysis of problems in the NHS. Many in the commentariat and many workers in the NHS are keen to short-circuit good, focused analysis and substitute attractive but dumb policy ideas.
There are, unfortunately, a large number of plausible analyses of key problems that are just wrong. And, if you leap from bad analysis to policy, an equal number of solutions that sound good but will simply waste time and resources because they won't work. (I would say obviously won't work, but that is apparent only to those who do the detailed, dirty work of actual analysis; the bad ideas are often attractive only because nobody has stopped to do any actual analysis or paid any attention to the analysis that has been done.) Leaping from symptoms to treatment with no intervening effort on generating a correct diagnosis is bad in medicine and just as bad in management.
One major cause of the Gresham effect here is that many proposed solutions sound good. So they take up space in newspaper headlines and discussion and thereby exclude the more nuanced solutions that require some explanation of why things are a bit more complicated than that. Hence Jeremy Hunt's repeated assertion that death rates are higher at the weekend. It is a nice, simple idea that sounds plausible and backs up one of his favourite policies: a 7-day NHS. But the reality is that the analysis is complex; we probably can't be certain that mortality really is worse at the weekend; and we certainly don't know what causes it even if it is true. So using it as a crutch to support an attractive policy (who doesn't want the NHS to work the same at weekends?) is deeply misleading. It is particularly misleading because even if we need an NHS that works the same way at the weekend, it is far from obvious that changing doctors' contracts will make any difference. We have plenty of operational evidence that the NHS doesn't work well at weekends, but it doesn't point the finger at medical staff as the key problem.
Another zombie idea that wasted space in policy and newspaper headlines was the idea that problems in A&E were caused by changes in the GP out-of-hours contract. Superficially it looked like the numbers attending A&E grew strongly after the contract was changed. But that was coincidence: England started counting attendance at minor injury units (and the number of such units grew rapidly) around the same time as the GP contract was altered. Core attendance at major A&Es (which is where all the problems with treatment speed are) didn’t change from its long term trend. And those who know the statistics also pointed out that few people attend A&E at night; volume is far higher during the day and peaks when GPs are still open.
In fact policy about A&E is littered with dumb ideas that simply can’t be reconciled with the actual data. The idea that A&E performance is declining because of too many people turning up is attractive. So there are repeated discussions about policies to respond to this: diverting patients somewhere else; massively increasing staffing in A&E; putting GPs at the front door… They all sound like they might do something. But every dataset we have says the performance problems have nothing to do with volume. In fact the best analysis says the biggest cause of slow A&Es is nothing to do with the A&E department at all: it is about the inability of hospital wards to accommodate the flow from A&E admissions. Spend all you want on the other policy ideas, but, if you ignore that bottleneck, you are wasting your money. Sadly, every time A&E performance deteriorates, we get a torrent of bullshit policies and almost no commentary that tries to identify (or points out that we have already identified) the most important problem and can do something about it.
One of the most important areas where bad ideas squeeze good ones from the arena is money. The NHS as a whole could probably use more money and probably should get it. Many parts of the system have been campaigning to get more of the budget for their activity. GPs complain that their share of the NHS budget has been falling (and then describe this as "cuts" when it isn't). They claim they are swamped by patient demand and can't cope without vastly more investment. The problem here typifies the way bad ideas squeeze out good ones. The bad idea is that all the problems are caused by lack of (or will be solved by more) money. Nothing else matters. There is therefore almost no discussion of whether anything other than an increased budget could make the life of a GP better or help the GP do a better job for patients.


Yet there are concrete examples showing that GPs who pay attention to how they match their capacity to what patients actually want (the demand) can dramatically lower their workload at the same time as improving patient satisfaction. Flexible attitudes to how patients' needs are met and wider use of modern technology can more than fill the perceived gap in capacity that GPs campaign about. But this gets almost no attention in the debate, as operational ideas that work are squeezed from the arena by demands for more money.

The idea that the only problem is money is insidiously dangerous across the whole NHS. I'm sure the system could do better with more. But to focus on campaigning for more money and forget all the other things that could be done to improve things is disastrous for several reasons. One is that getting more money is unrealistic in the short term; we should be seeking improvements that can make a difference right now. Another is that getting more money without fixing some of the current problems is likely to guarantee that the money will deliver far less benefit than expected should it ever arrive. If we lack good management systems that create an awareness of where the real problems are, we will spend extra money on things that don't address those problems. A belief that money is the only problem pushes out any thinking about the problems we could fix right now without any extra money, and prevents us acting now so that any future money is spent on the areas that will yield the largest benefits.


Another attractively populist idea is encapsulated in the slogan "more resources to the front line". It is attractive because it makes a good slogan. It feeds the popular myth that bureaucracy consumes too much money for no useful purpose. It panders to the idea that every problem is solved by having more front-line staff. Sadly, every analysis suggests the NHS is extraordinarily undermanaged (see my comments here). While it is possible to be too bureaucratic and undermanaged at the same time, cutting the budget for management is not exactly an effective response. One of the biggest problems in the NHS is a failure to coordinate care, and that is a management and information problem that gets harder, not easier, when you have more medical staff to coordinate. And not just across organisations but inside them. In many hospital settings the biggest failures in both quality and productivity come because the activity of different people is not well coordinated. This affects how we discharge patients in a timely way; it damages the throughput of operating theatres; it hurts patients because their medication is screwed up; it guarantees long waits in A&E because it is hard to find free beds (we don't coordinate the discharge process with the demand pattern for emergency beds).





So what?


The battle against statistical bullshit and Gresham's law must be fought. The more bad ideas are allowed to dominate debate and policy making, the less improvement will actually happen in the NHS.


Part of the problem would be addressed if the NHS collected better data about what actually happens on the shop floor. Too much of the data currently collected is focussed on top-down performance management rather than identifying and fixing operational problems. The central management style that demands ever more performance reporting (as criticised in this excellent rant by Nigel Edwards) drives out intelligent thinking about the root causes of problems (not least because it consumes so much of the scarce management time available for the operational managers who should be problem solving). Worse, the senior management of hospitals have a worrying tendency to collect data just for performance reporting while neglecting to collect the data they should acquire so they can understand the causes of their operational problems.


Even when we do collect useful information we tend to collect it slowly and make shamefully little use of it to derive operational insights. Patient-level data on A&E performance, for example, has had almost no influence on where money is directed in attempts to solve the persistent decline in A&E performance (see my argument here). We need to make more use of the big datasets for improvement and we need better tools to enable managers to get to those insights more quickly.


Having good analysis of the problem isn't enough. We also need to communicate those answers in ways that actually influence people. Data isn't convincing by itself: it needs to be turned into a message that works for the different audiences that can make a difference to what gets done. Partly this is about paying attention to how data is communicated: good data visualisation is an often neglected first step. But we also need to tell convincing stories. Bad ideas propagate not just because they are often unchallenged but because they are encapsulated in convincing, plausible stories. Gresham's law applies because the bad ideas sound more plausible than the good ones and attract more attention in the commentariat, the policy makers and the operational managers. Counteracting this with analysis isn't enough: we need better stories about what works as well.

The fight against Gresham's law must be fought. If it isn't, the NHS will continue to waste effort on initiatives that won't help it improve. Even if it eventually gets more money, much of that will be wasted because it won't be focussed on addressing the real bottlenecks to better performance. Britain's most loved public institution can't afford that.