Cummings’ piece is complex, referencing his own back catalogue of thinking on the state and politics, the reading of which is essential for anyone thinking of applying. I am therefore going to focus more narrowly on the microeconomics references in the piece: bad Nash equilibria in game theory, and Bayesian inference. The former, in this context, is “the problem” – why does government keep failing to deliver big projects on time and on budget, and then never learn from its mistakes – or more rarely its successes – to do better in future? The latter is a mathematical tool for updating our beliefs based on evidence. Applied repetitively, particularly via machine learning within Artificial Intelligence (AI) tools, and communicated effectively in the right environment for maximum impact, it might enable government to make better decisions, faster.
This is contrasted with asking a generalist senior “public school bluffer” with no expertise or evidence for their best guess, and getting an answer that approximates to what they think you want to hear, while they assiduously bury bad news, then promoting them for their wisdom in not rocking the boat, rather than any evidence of competence or concern for public service. That at least appears to be his characterisation of the culture he is trying to change, while recognising the efforts of “brilliant individuals” within the system who are trying to help.
In brief, Cummings cites evidence that megaprojects (>$1 billion in cost) run by all types of government often fail, and fail for reasons that are often repeated. From defence procurement to tackling climate change to lifelong learning, bad decisions are made and bad projects contrived, with bad leadership, which are then badly run, to the lingering harm of the public and any future governments now saddled with both the debts from failure and personnel trained to repeat it. Nothing changes.
Worse, government culture is designed to repel disruptive innovators who might do better, and in political leadership there is a willingness to “fail conventionally” rather than risk ridicule for thinking differently, even if that might yield longer-term success. It is not hard to see his experience of trying to reform the Department for Education in the Cameron era as shaping these thoughts.
The bad Nash equilibria in this context are the assured failures of government projects due to the structure of incentives and interplay of decisions and actors that lead to them. Their repetition suggests current government learning strategies such as project reviews do not work: the incentives don’t change, the people don’t change (or rather the type of person doesn’t – too-rapid change of personnel is part of the complaint, as is groupthink). Repetitive failure then is baked into the way government works, or rather does not work. Cummings does not believe it needs to be this way. What if small teams of highly able people, selected for their brilliance, were able to sit outside the conventional structures of government and do things differently, encouraging change by example?
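The structure of such a trap can be made concrete with a toy payoff matrix. The numbers below are entirely invented for illustration: two departments each choose between “fail conventionally” and “innovate”, and because innovating alone is punished, playing it safe strictly dominates – the only Nash equilibrium is the bad one, even though mutual innovation would pay both sides more.

```python
from itertools import product

# Hypothetical payoffs (row player, column player), invented for the demo.
# Innovating alone risks ridicule (payoff 0), so "conventional" strictly
# dominates for both players, locking in the inferior outcome (1, 1)
# even though (innovate, innovate) would pay both players 2.
STRATEGIES = ["conventional", "innovate"]
PAYOFFS = {
    ("conventional", "conventional"): (1, 1),
    ("conventional", "innovate"):     (3, 0),
    ("innovate",     "conventional"): (0, 3),
    ("innovate",     "innovate"):     (2, 2),
}

def is_nash(row, col):
    """Nash equilibrium: neither player gains by unilaterally switching."""
    r_pay, c_pay = PAYOFFS[(row, col)]
    row_ok = all(PAYOFFS[(r, col)][0] <= r_pay for r in STRATEGIES)
    col_ok = all(PAYOFFS[(row, c)][1] <= c_pay for c in STRATEGIES)
    return row_ok and col_ok

equilibria = [cell for cell in product(STRATEGIES, STRATEGIES) if is_nash(*cell)]
print(equilibria)  # only the bad outcome survives as an equilibrium
```

This is the classic prisoner’s dilemma shape: escaping it requires exactly the levers discussed below – changing the payoffs, enabling co-operation, or breaking the game.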
This is not entirely novel: Cummings cites examples of “getting it right” through successful megaprojects in history such as the Apollo and Manhattan Projects, although he notes their success was not sustained, in no small part due to the departure of visionary leaders, which raises the question of why he thinks he can make himself redundant in a year. What is new is his particular take, applied to the modern context, particularly around the adoption of AI to improve project decisions and organisational learning within the machinery of the state. He wants brilliant minds in data science, software, economics, policy, communications and project management, unconventional enough to embrace concepts from each other’s disciplines (something Hayek was rather good at), while well managed enough to be focused on delivery, cognisant of how their contribution advances whatever overarching objective they are set. It’s an exciting idea and a challenge to free-marketeers who believe that government, and particularly big government, is generally doomed to fail.
In game theory a bad outcome can be avoided by co-operation, changing incentives or breaking the game. Cummings, I suspect, is trying to do all three. Co-operation here means at one level cutting through departmental silos and the culture of “failing and moving on” to force government to learn from its mistakes. This is more organisational than technical, but within projects better communication is part of the process by which incentives can be changed. In a repeated game we learn from our mistakes, but this only makes a difference if we can communicate that learning to other decision makers. How we learn is where Bayesian inference is important. If we believe we know the probability of an event and receive new information, we apply that information to update our assessment of probability. This inference then updates our calculation of pay-offs (or incentives) and might lead us to make a different decision. If we can convey that change to the other decision makers we can avoid the certainty of second best, and change the “minimax” culture where we seek to avoid losses rather than pursue success. At least I think that’s what he’s saying.
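The chain from evidence to updated belief to changed decision can be sketched in a few lines. The probabilities and payoffs below are invented for illustration: a prior belief that a project will succeed is revised via Bayes’ rule after a milestone slips, and the expected payoff of continuing flips from positive to negative.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """P(hypothesis | evidence) via Bayes' rule."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

# Hypothetical numbers: we start 60% confident the project succeeds.
prior_success = 0.6
p_slip_given_success = 0.2   # successful projects sometimes slip a milestone
p_slip_given_failure = 0.8   # failing projects slip them often

# A milestone slips; update our belief.
posterior = bayes_update(prior_success, p_slip_given_success, p_slip_given_failure)
# 0.12 / (0.12 + 0.32) ≈ 0.273

def expected_payoff(p_success, win=100, loss=-60):
    """Expected value of continuing the project (illustrative stakes)."""
    return p_success * win + (1 - p_success) * loss

print(round(posterior, 3))                       # belief falls from 0.6 to ≈0.273
print(expected_payoff(prior_success))            # before the evidence: +36
print(round(expected_payoff(posterior), 1))      # after: ≈ -16.4, so stop
```

The point of the exercise is the last line: the same payoff calculation, fed an updated probability, recommends a different decision – which is only useful if that update reaches the people choosing.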
Bayesian inference, more mundanely, is a mathematical way of modelling how people think and, importantly, how they change their minds. That it is mathematical and repeated at scale, a computational challenge, makes it central to AI. Machines use maths; we use hunches, but those hunches are rooted in decades of acquired, processed and simplified memories, and our intelligence lies in how rapidly these can be accessed and compared to inform our decisions about what to do. When these hunches prove to be correct we call it a good guess, and when they do so repetitively we call it knowledge or expertise. Artificial intelligence then is hunches with maths, repeated faster than any human can manage, acquiring knowledge from guessing faster, and in doing so informing better decisions – at least in circumstances where that advantage can be established.
A relevant and current example of this is the tools Facebook and others use to improve the serving of digital adverts on social media. Ad-variants are constantly tested by the ad server, prioritising those that get the most likes or retweets, and updating those priorities as people get bored and stop clicking. This is vastly more efficient than the way old media campaigns used to work, with instant feedback on advertising effectiveness rather than the lengthy and costly market research that once gave planners better hunches about concepts, and where and when to show them. Cummings doesn’t just believe this is revolutionary; he applied it to the Vote Leave campaign in 2015/16, claiming 98% of the marketing spend was on data-driven social media, whereas the politicians wanted to spend it on billboards. Nor does he claim this necessarily won the referendum, just that it was a far more effective use of resources than spending by the other side, and allowed him to prove a point about the skills he is seeking.
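The constant test-and-reprioritise loop described above is, in textbook terms, a multi-armed bandit. The sketch below uses Thompson sampling – one standard bandit algorithm, not a claim about any platform’s actual system – with invented click rates: each ad variant keeps a Beta posterior over its click rate, the server shows whichever variant samples highest, and the posterior is updated on the observed response, so spend migrates to the best performer automatically.

```python
import random

random.seed(42)

# Invented "true" click rates for three hypothetical ad variants.
TRUE_RATES = {"ad_a": 0.02, "ad_b": 0.05, "ad_c": 0.01}

# Beta(1, 1) priors: one pseudo-click and one pseudo-skip per variant.
state = {ad: {"clicks": 1, "skips": 1} for ad in TRUE_RATES}

def choose():
    """Thompson sampling: show the variant whose sampled click rate is highest."""
    return max(state, key=lambda ad: random.betavariate(
        state[ad]["clicks"], state[ad]["skips"]))

for _ in range(5000):
    ad = choose()
    if random.random() < TRUE_RATES[ad]:   # simulated user response
        state[ad]["clicks"] += 1
    else:
        state[ad]["skips"] += 1

# Impressions served per variant (minus the two pseudo-observations).
served = {ad: state[ad]["clicks"] + state[ad]["skips"] - 2 for ad in state}
print(served)  # the strongest variant ends up with most of the impressions
```

This is the Bayesian updating of the previous section run at machine speed: every impression is a small experiment, and the posterior is the institutional memory of all of them.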
It is less clear how Cummings can use his “misfits” and these tools to achieve the outcome he desires in government. I can well imagine a new situation room in Downing Street that clearly and in real time visually explains to ministers why a corporate pitch to secure government funding for carbon capture and storage as a solution to climate change is doomed to failure in the UK, not just in the form presented but in a range of realistic scenarios rooted in evidence that largely point to Chinese leadership. I can well imagine them showing the returns possible from putting a fraction of that request into researching the extraction of biofuels from GM algae in solar tubes, without subsidising the pilot. But both those conclusions are possible now with conventional planning tools, albeit less engagingly presented. That, though, may be crucial to persuading a minister with attention deficit disorder and an inbuilt phobia of scary words to make a tough decision, with fewer opportunities for hi-vis jacket photographs.
I can imagine this team finding aspects of government activities, for example the processing of welfare payments, where machine learning could seriously improve the avoidance of harmful errors, and the training of staff in how to avoid them. But I’m not sure this constitutes a megaproject. I can imagine them applying some of these approaches to concurrent trade negotiations, ensuring each bilateral trade team is acutely aware of the trade-offs being requested by different potential partners and the quantified impact on UK exports and consumer choice. These could be genuinely revolutionary approaches with serious impact. I could imagine attempts to scrap all ministerial and high-level civil service salaries, replacing them with incentive schemes linked to agreed evidence of actual success. I could imagine linking MPs’ pay to the state of the economy rather than their own notions of their importance relative to benchmark wealth creators and public servants. That might address aspects of the political incentives problem, at least until replaced by gender-pay gap targets by the next woke government.
It is then possible to see this as being quite ground-breaking and positive – that is if the Cummings disruptors can overcome resistance, and are allowed to fail as well as succeed, which they assuredly will from time to time (and this will trigger the politicians to try and shut them down).
There are though other issues, some alluded to in the opening links. It’s not obvious in some cases that the issue is the culture of government, more that the government is involved at all. We don’t clearly need state planning of agriculture in the 2020s and there is no big question of food security to solve as was feared in the 1940s, yielding nearly a century of ill-conceived interventions and malinvestment. Is climate change a mega-project or has making it a mega-project rather than a market been a major cause of failure to address it?
There is also the economic calculation problem, a test that should be applied to all the claims of big tech sales pitches as much as it was applied to the illusions of socialist central planning. Is the issue really either an insufficiency of data, the failure to use data to drive better decisions, or not having scientists involved; or is it that technocratic allocation, however sophisticated, is the wrong approach? In the NHS for example, it’s quite fun to imagine replacing NICE (the body that decides what drugs you can queue to receive in the hope you get them before being maimed or killed by waiting) with an AI immune to the entreaties of lobbyists. But it’s not clear why you’d do that rather than simply recognising other competent authorities and their recommendations, thereby allowing their sale more widely in the UK, potentially broadening access to better, cheaper treatments and therapies at a stroke. The AI’s decisions would still kill people and “the computer says no” is unlikely to appease the voters.
It would be good then if the future Downing Street Red Team included a few non-Reds to ask the difficult questions of any project proposal, “should we be doing this?” and “what if we did nothing?”, with their conclusions regularly published.
If that were a part of the project, alongside better managed mega missions, both could be genuinely transformative.