I hope that in terms of political orientation, “Medical Twitter” – that is, doctors with an active social media profile – is not representative of the medical profession in the UK as a whole. Politically, the doctors you come across on Twitter are usually somewhere to the Left of Kim Jong-un.

My main problem with Medical Twitter is not that it’s obsessed with ingroup-conformity-signalling, or that it’s baffled by dissent. That is true of most corners of social media. No, my main problem with Medical Twitter is that it’s exceptionally tedious. It is only on Medical Twitter that people still think that calling an opponent an “ideologue” who needs to look at “the evidence” rather than cling to “dogma” and “blind faith” is an unbelievably clever and sophisticated point to make.

Here’s the thing: every single person in the world who has a political opinion on anything is convinced that they are the reasonable one who is “just stating the facts”, and that their opponents are biased and ideological. Every single one.

It’s always the other guy who is the ideologue and the dogmatist, who “twists the facts to make them fit their pre-conceived narrative”. We all think that we just dispassionately look at the evidence, and that our conclusion is the one that any fair-minded, sensible and informed observer would reach. We all think that if our opponents weren’t so pig-headed and wilfully stupid, they would see that we are right, and that they are wrong.

This has to do with the psychology of motivated reasoning. When political and moral ideas are involved, beliefs are not just a means to an end. There are beliefs that we want to hold, and there are beliefs that we want to reject. So we often apply lax standards to evidence that seems to confirm the former (or refute the latter), combined with impossibly high standards for evidence that seems to confirm the latter (or refute the former).

What makes motivated reasoning so insidious is the asymmetry in our ability to spot it. We can easily see it when our political opponents engage in it – but it takes a lot of intellectual self-discipline to notice it when we do it ourselves.

That’s why we’re always convinced that our opinions are solidly grounded in facts and evidence, while our opponent is being unreasonable. And it’s why our opponent is just as convinced that their opinions are solidly grounded in facts and evidence, while we are being unreasonable.

The scope for motivated reasoning is endless, because it’s rarely possible to prove or refute anything with 100 per cent certainty. Even for facts that are fairly well established, there is almost always some residual ambiguity. And for the mind in motivated-reasoning mode, that’s all it takes.

Let’s take a simple example from economics: rent controls. Rent controls don’t work. They reduce the supply of rental housing, and cause all kinds of distortions in the housing market. That’s as well established as any finding in economics can realistically be (see here for a literature review), and the consensus among economists is as close to unanimity as it can realistically get (given that economists would probably also disagree on the shape of the earth).

And yet, rent controls remain popular. Not just among the general public, but among people who really should know better, such as journalists who frequently write about housing issues, or housing campaigners. More than once, I have seen, or participated in, discussions which went more or less like this:

Person A: “Here’s Study A. It finds that rent controls don’t work.”

Person B: “Totally unreliable. The sample size is far too small. You’re citing a junk study just because it supports your free-market dogma.”

Person A: “Here’s Study B. It finds that rent controls don’t work.”

Person B: “Totally unreliable. The time period is far too short. You’re citing a deeply flawed study just because it confirms your neoliberal prejudice.”

Person A: “Here’s Study C. It finds that rent controls don’t work.”

Person B: “Means nothing. They are only looking at one very specific kind of rent control, which doesn’t tell us anything about rent control per se. You’re citing a completely irrelevant study, just because it suits your narrative.”

Person A: “Here’s Study D. It finds that rent controls don’t work.”

Person B: “This study is about a city which already had lots of problems before rent controls. It’s nonsense to blame this on rent controls, and you just do so because it tallies with your pre-conceived ideas.”

Person A: “Here’s Study E. It finds that rent controls don’t work.”

Person B: “This is about a city where rent controls were exceptionally badly implemented. It’s such a cherry-picked example. Says a lot that that’s the best you can come up with.”

What’s interesting about Person B’s style of reasoning – and Medical Twitter’s style is a lot like that, see e.g. here – is that none of their objections look like crude denialism. In isolation, those could all be perfectly valid objections. But if we look at the package, we can see what’s going on: Person B really, really wants rent controls to work, and is fiercely determined to find a reason to reject evidence to the contrary. And they will find one. Of course they will. You can always find one, if you try hard enough.

What is certain is that at no point does it occur to Person B that they might be engaging in motivated reasoning. They are convinced that they are just exposing methodological flaws in dubious studies. They walk away from the discussion convinced that they have utterly demolished Person A’s weak, ideology-driven argument.

How to reduce the temptation for motivated reasoning is another topic altogether (let’s just say that membership of a moral tribe, which rewards conformity and punishes dissent, does not help). For now, suffice it to say that calling an opponent an “ideologue”, and telling them that they need to look at “the evidence”, is not quite the brilliant argument that people in places like Medical Twitter think it is.


This article was first published on CapX.

Head of Health and Welfare

Dr Kristian Niemietz joined the IEA in 2008 as Poverty Research Fellow, becoming its Senior Research Fellow in 2013 and Head of Health and Welfare in 2015. Kristian is also a Fellow of the Age Endeavour Fellowship. He studied Economics at the Humboldt Universität zu Berlin and the Universidad de Salamanca, graduating in 2007 as Diplom-Volkswirt (≈MSc in Economics). During his studies, he interned at the Central Bank of Bolivia (2004), the National Statistics Office of Paraguay (2005), and at the IEA (2006). In 2013, he completed a PhD in Political Economy at King’s College London. Kristian previously worked as a Research Fellow at the Berlin-based Institute for Free Enterprise (IUF), and at King's College London, where he taught Economics throughout his postgraduate studies. He is a regular contributor to various journals in the UK, Germany and Switzerland.

2 thoughts on “The dangers of motivated reasoning”

  1. Posted 24/01/2018 at 14:32

    The discussion between Person A and Person B highlights one merit of motivated reasoning — namely, that it motivates reasoning. Person B’s determination to find fault in the studies may blind them to the folly of rent controls, but it does help to unearth whatever shortcomings there might be in the studies. This is a contribution to human understanding, which they might not have made if not so “motivated”. The problem is when they stop looking for the fault in the study and dismiss it simply because of its conclusion. Then there is no upside to their “motivation”.

  2. Posted 25/01/2018 at 17:35

    Here’s the real-life model for “Person B”:
    I don’t think this adds much to human understanding…
