“Psychologists now have file cabinets full of findings on ‘motivated reasoning’, showing the many tricks people use to reach the conclusion they want to reach. When subjects are told that an intelligence test gave them a low score, they choose to read articles criticizing (rather than supporting) the validity of IQ tests. When people read a (fictitious) scientific study that reports a link between caffeine consumption and breast cancer, women who are heavy coffee drinkers find more flaws in the study than do men and less caffeinated women.”
The responses I have received since the publication of my UK 2020 paper ‘The UK health system: An international comparison of health outcomes’ provide a wonderful illustration of the psychology of motivated reasoning.
The paper finds that the NHS lags behind the health systems of comparable countries across a wide range of different measures from different sources. These are not just any cherry-picked measures: they are some of the most widespread conditions in developed countries, which affect thousands of people in the UK. A health system which fails so consistently on these important measures can be said to be a failure overall.
The table below compares UK death rates for a number of conditions to the death rates observed in the country which comes out 12th best in the respective category. The column on the right shows the number of lives that could be saved every year if the UK death rate could be cut to that of the 12th best country.
[Table: Mortality rates (avoidable deaths per 100,000) in the UK vs the 12th best country, and conversion into number of lives lost per year]
These are sobering findings. But the NHS is also the national religion – cue dancing nurses forming the letters ‘N’, ‘H’ and ‘S’ – so the findings must obviously be wrong.
If you are looking for an excuse to dismiss the paper, you don’t need to look for long. Nothing could be easier. International comparisons of health outcomes are never a straightforward business. Even the most widely used and well-respected measures and data sources can be critiqued from various angles. Consequently, people who were angered by the paper’s findings took to Google, dug up the numerous measurement issues with the indicators I use, and then rushed to social media to declare that the paper had been ‘DEBUNKED’, ‘DEMOLISHED’, ‘DESTROYED’, and so on.
A good example is this response by Dr Margaret McCartney, a GP who writes for the British Medical Journal and broadcasts for Radio 4. McCartney throws everything she can possibly find at every indicator used in the paper – and yet, in doing so, she entirely misses the point, because I don’t actually dispute any of that. In fact, most of her article just repeats caveats that I have already acknowledged, and discussed at great length, in the paper itself.
If you demand absolute perfection, you are not going to find it in this report, or in any other report on that subject. But there is a world of difference between ‘the data are not perfectly reliable’ and ‘the data are systematically biased against one country, namely the UK, and one health system, namely the NHS’. There is lots of evidence for the former, but none whatsoever for the latter. Yet NHS purists always seem to mean the latter when they say the former.
Let’s put it this way. Suppose a balance is faulty: we know that it systematically overstates people’s true weight, but we don’t know by how much. According to that balance, my weight is 95kg, and yours is 90kg. We know for sure that both numbers are wrong. But it is still fair to say that:
- I am heavier than you, and
- the difference between us is about 5kg
What would you make of it if I protested: “But that’s not true! The balance is faulty, and it automatically follows that in reality, you are the heavier one.”
That would not quite cut the mustard, would it? Pointing out that the balance has some fault is not enough: I would have to show that the balance is systematically biased against me. I would have to show that, far from indiscriminately adding x kg to everybody’s true weight, it only adds a small amount to yours (or maybe it even subtracts from it), and a huge amount to mine. It does not just have a random measurement error – it is systematically set up in order to make me look fat (at least relative to you).
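The distinction between a uniform measurement error and a targeted bias can be made concrete with a toy simulation. The true weights below (92kg and 87kg) and the offsets are hypothetical numbers chosen purely for illustration; they are not data from the paper:

```python
# Toy illustration of the balance analogy (hypothetical numbers).
# A balance with a uniform systematic error overstates everyone's weight,
# but comparisons between people remain valid. Only a fault that distorts
# each person's reading differently would invalidate the comparison.

TRUE_ME, TRUE_YOU = 92.0, 87.0   # hypothetical true weights in kg; true gap = 5kg

def uniform_error(weight, offset=3.0):
    """Adds the same x kg to every reading: absolute values are wrong,
    but differences between readings are preserved."""
    return weight + offset

def biased_against_me(weight, person):
    """A balance 'set up to make me look fat': it distorts each person's
    reading by a different amount, so the comparison itself is corrupted."""
    return weight + (8.0 if person == "me" else -1.0)

# Uniform error: both readings are wrong (95 and 90), yet the gap is still 5kg.
gap_uniform = uniform_error(TRUE_ME) - uniform_error(TRUE_YOU)

# Targeted bias: the gap balloons to 14kg (100 - 86), nothing like the true 5kg.
gap_biased = biased_against_me(TRUE_ME, "me") - biased_against_me(TRUE_YOU, "you")

print(gap_uniform)  # 5.0
print(gap_biased)   # 14.0
```

Pointing out that the balance is faulty only establishes the first case; the critic’s burden is to demonstrate the second.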
And that is the main problem with McCartney’s ‘rebuttal’. It does not seem to occur to her that if an indicator overstates UK death rates, it will also overstate Swiss, Dutch and Belgian death rates. She would have to show that the relevant measurement issue exists only (or at least disproportionately) in the UK, and that Switzerland, the Netherlands and Belgium are somehow immune to it. Of course, she offers no reasons why this should be the case, because there aren’t any.
The two methodological papers on cancer survival rates which McCartney links to certainly show no such thing. The first paper explains the many steps that researchers take to correct for the various measurement issues, and how the data are getting more reliable over time. The second is a paper from fifteen years ago, which discusses the teething troubles of cancer registries in Mediterranean countries, which apparently started collecting data much later than Northern Europe. Interesting – but what has that got to do with my paper?
I have seen this type of response from the Church of the Sacred NHS many times. They start by pointing out that there are lots of problems with measuring health outcomes, which is, of course, true, and for which they can present lots of good evidence. But they then automatically jump to the conclusion that all international data must have a systematic anti-NHS bias, and that if only we could sort out the measurement issues, the NHS would come out as the world’s best system. This strikes me as a bit of a leap.
McCartney and I agree on one thing: every single measure of health system performance has its drawbacks. We just draw different conclusions. My conclusion is that we should hedge our bets, and look at a package of indicators, drawn from a variety of sources (which is what the paper does). If the NHS does badly on one or two measures, never mind. But if it consistently does badly – and it does – then we should mind. I am not quite sure what McCartney’s conclusion is. Is it that we should stop using data altogether, and just take it as given that the NHS is obviously the world’s best healthcare system?
Maybe it does not matter. McCartney’s real beef with the paper seems to be about the policy conclusions that people might draw from it, not the data per se: her article carries the ironic headline “NHS is failing and must obviously be privatised”, which is interesting, because the paper does not actually mention the dreaded p-word (or any synonym of it) a single time. Indeed, it does not talk about policy conclusions at all. Don’t get me wrong: of course I would privatise the NHS. If I could push a button that would replace the NHS with a market-based system, I would push that button faster than you can say “envy of the world” or “…but the Commonwealth Fund…” or “Harry Leslie Smith”. But that’s not what this paper is about.
And it is by no means the only conclusion one can draw from it. On social media, some responses have been along the lines of “Of course the NHS is failing – but that is only because of Tory austerity, privatisations and the PFI!”. That’s not the conclusion I would draw, but there is nothing in this paper that could stop people from drawing it. I wonder whether McCartney would have felt the same need to benchmark the paper against impossible standards if I had presented it in those terms.
Dr Kristian Niemietz is the author of the report ‘The UK health system: An international comparison of health outcomes’, published by UK 2020.
Watch his presentation at the report’s launch event here.