As was the case when the book first came out, the majority of detractors did not actually address its content, but engaged instead in speculation about what motive I could possibly have for making its case. The attributed motives range from being a hired gun in the pocket of shady corporations, to being an incorrigible ideological fundamentalist (some even manage to believe both of those – clearly mutually exclusive – things at the same time), to being just generally a bad person.
Let’s assume that all of those things are true. Even the mutually exclusive ones, through some magic. Let’s assume that I am indeed guided by the most sinister, insidious motives one could possibly have. That would still not mean that my findings are wrong. As far as this book is concerned, my motives are actually not that important.
Presumed motives sometimes matter. Assessing each and every argument on its own merits, regardless of where it comes from and what motives the person or organisation making it might have, is not always feasible in practice. Take a clinical trial that involves some potential conflict of interest. It is obviously not possible for you, or me, to try to replicate that trial and double-check the results: it would be prohibitively expensive, and would require an extremely high level of technical expertise. We can either trust the study, or not trust it. We cannot check for ourselves. Or take those infamous intelligence reports about weapons of mass destruction in Iraq: it would have been quite difficult to check those for yourself.
Thus, while it’s an extremely crude heuristic, it’s not always completely irrational to say “I don’t know how to critique or refute those findings, but I still don’t believe them. I just don’t trust the person or organisation that came up with them.”
However, this case is a bit different. In this book, I’m not producing any new primary data. I just gather data that is already in the public domain, sift through it, and look for patterns. Therefore, there is nothing in the book which you could not verify through a quick Google search. There is not a paragraph where I’m saying, “Just trust me on this one. Take it from me.”
I collect data, but I cannot change it. It is what it is. I have to take it or leave it. If the NHS consistently outperformed the systems that I present as desirable alternatives, then there would be nothing I could do about that. You don’t have to trust me. But surely, the OECD, the Lancet, Eurocare, Eurostat, the WHO, the Commonwealth Fund, Cancer Research UK, and the other sources I’m using can’t all be in the pocket of Big Evil; they can’t all be free-market fundamentalists, and they can’t all be staffed by bad people.
I could, of course, have cherry-picked my indicators. I could have ignored a wealth of indicators on which the NHS does brilliantly, and selected a few wildly unrepresentative ones on which it does badly. After all, modern health systems deal with thousands of different conditions every day, and almost all systems will do well in some respects, and badly in others. Doesn’t this mean that an author can just pick and choose whatever suits their narrative?
Not quite. While health systems do indeed deal with thousands of conditions, for most of them, there is no obvious measure of success. It is not clear how we could tell whether this system or that system is generally better at dealing with, say, chronic respiratory problems. This is why international comparisons are normally limited to conditions that are matters of life or death. For these, we have an obvious measure of success: the survival rate.
This narrows the range of possible indicators considerably, and it reduces the scope for cherry-picking: there simply isn’t that much to pick from. However, insofar as that temptation still exists, I avoid it anyway, because I select only those conditions that affect the largest number of people. For example, there are over 100 different types of cancer. You can find varieties on which the NHS does quite well, and you can find varieties on which it does very badly. You could cherry-pick if you wanted to.
But I pick them on the basis of how common they are, not on the basis of outcomes. More specifically, I pick the five most common ones, which, taken together, account for nearly 200,000 cases per year, or 56% of the total. It is fair to say that if a healthcare system does badly on these five, it does badly on cancer care overall. I then do the same for other conditions. And I have yet to see anyone explain what’s wrong with that approach.
Either way – the results don’t look good. The NHS is consistently behind Western Europe, North America and the developed parts of Asia. It is usually about on a par with the Czech Republic and Slovenia. If you looked at the figures without knowing which data point represents which country, and tried to guess, you could easily mistake the UK for an Eastern European country. You would certainly never confuse the UK with Switzerland or Belgium. This is true irrespective of my motive for saying it.
And if it turned out one day that the book wasn’t actually written by me, but jointly authored by Ramsay Bolton, Joffrey Baratheon and Cersei Lannister – the findings would still stand.
- ‘Universal healthcare without the NHS’ by Kristian Niemietz
- ‘Winter is coming. NHS crisis talks’. Podcast by Kate Andrews and Kristian Niemietz