Economists – know your limits!
Certainly, economic experts did not come off well in the financial crash, one of the most cataclysmic events to hit developed-world economies since World War II. It is well known that most mainstream economists missed the signs of the crash and did not forecast it. Indeed, it is so well known that, in November 2008, the Queen visited the London School of Economics and asked of the crash: ‘Why did nobody notice it?’ That was a good question. The first sentence of the last Bank of England Financial Stability Report issued before the financial crisis started in the UK read: ‘The UK financial system remains highly resilient.’
Just as the crash was not predicted, forecasters hardly covered themselves in glory in the period afterwards either. In 2009, the Bank of England believed that there was a negligible probability of inflation rising above 4 per cent within two years. In fact, it rose to 5 per cent. The over-estimation of growth by the OBR after its inception in 2010 was enormous. These were not equal and opposite errors in different years (which would merely suggest that the timing of growth differed from forecast) but errors in the same direction year after year. In other words, something was missing from the model.
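As a purely illustrative aside (the numbers below are hypothetical, not the OBR’s actual figures), the distinction matters because random forecast noise tends to wash out on average, whereas a bias does not. A minimal sketch:

```python
# Minimal sketch with hypothetical numbers (not the OBR's actual forecasts):
# if errors were just noise around the truth, their average would sit close to
# zero; errors that all share the same sign point to something missing from
# the model.

forecasts = [2.6, 2.8, 2.4, 2.7, 2.5]   # illustrative growth forecasts, %
outturns  = [1.8, 1.1, 1.6, 2.0, 1.5]   # illustrative outturns, %

errors = [f - o for f, o in zip(forecasts, outturns)]
mean_error = sum(errors) / len(errors)

print("errors:", [round(e, 2) for e in errors])  # all positive: over-prediction every year
print("mean error:", round(mean_error, 2))       # well away from zero: systematic bias
```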
Further errors were made around the Brexit vote. Just before the vote, the Bank of England expected the economy to grow by 2.3 per cent in 2017. After the vote, it cut that forecast to 0.8 per cent, before upgrading it to 1.4 per cent in November 2016 and to 2 per cent in February 2017.
There are at least two overlapping problems when it comes to economic forecasting. The first is that economists focus their thinking on what is measurable rather than on what is important. The second is that economists have come to over-value formal modelling and the spurious precision it offers. These were important themes for F. A. Hayek, not least in his Nobel Prize lecture.
Let us take the example of monetary growth and inflation. Presciently, Mervyn King noted back in 2002 that central bank models do not include money despite money being the main driver of inflation. King said that he believed inflation was a disease of money and that there were real dangers in central banks relegating money to a ‘behind-the-scenes’ role. Specifically, he said: ‘My own belief is that the absence of money in the standard models which economists use will cause problems in future.’
How right he was. Indeed, at least in the US, excess monetary growth almost certainly contributed to the asset price inflation that was a cause of the crash. Mervyn King was an expert, but he was an expert who knew that what was not in the model was more important than the variables that were modelled. Unfortunately, central banks – including his own – did not heed his advice.
The reason that macro-economic models do not include money is that the relationships between the supply of money and the economy are not easy to model. Typical models based on aggregate demand and supply and output gaps are easier to construct and test. But, in focusing on what is measurable, such models miss what is important.
The problem with modelling in economics is that it is not like modelling in the physical sciences. Economic outcomes depend on the behaviour of seven billion people, all with a will of their own. Economists over-estimate their ability to model with accuracy; what they can judge are ‘tendencies’ or ‘patterns’. We know that a minimum wage will probably increase unemployment. However, we don’t know by how much, amongst which groups, over what timescale, and so on. There are too many factors and interactions to understand the magnitude of the effect with any precision. Will people try to work more if there is a higher minimum wage? Will companies lay off workers, reduce hours, reduce fringe benefits or try to work employees harder? Will a minimum wage make imported competing goods relatively cheaper, or will the increased demand for imports lead to a lower exchange rate, thus nullifying the effect? Will labour-intensive industries gradually be replaced by more capital-intensive competitors? Will immigration fall – or perhaps increase? The range of questions that has to be considered to understand the precise impact of a policy change is enormous and beyond economic modelling.
Indeed, in an era when economists are wheeled out more and more to present their forecasts, it is increasingly being appreciated in the physical sciences that we know less than we thought we did. It is very clear, for example, that the impact of man-made climate change is very difficult to predict. To give just one example, it now appears more likely that climate change will lead British summers to be wet and windy rather than hot and dry – but views on that might change again. What will happen to winters is anybody’s guess. These things depend on the interaction between the Gulf Stream, the melting of the polar ice caps, the position of the jet stream and the salt content of sea water. Something is likely to happen, but quite what, and to what extent, we don’t know. The effect is especially unpredictable given the interaction of natural climate change with man-made impacts. If climate change modellers were clearer about this, perhaps they would not be dismissed quite so readily.
A healthy dose of humility would be good for economic modellers too. Perhaps a bit more focus on theory and a little less on number crunching would help economists understand better what the effects of policy changes will be. We economists should not need to pretend that we can predict, to several decimal places, things that do not really matter in order to justify our value to the world. After all, the really big questions, such as ‘what institutional frameworks best promote economic development for poor people?’, do not require answers to three decimal places.
7 thoughts on “Economists – know your limits!”
Interesting. I would have thought that a model could be tested against historical values where the outcome is known, to check the model.
Chaos theory has some things to say about the limits of predictability, but I think Paul Ormerod is the only economist who takes that seriously. Perhaps the maths is too difficult. As to Steven Procter’s comment, this illustrates perfectly the gap in understanding of the general public. (I don’t want to be mean!) If the dynamic system that is the economy goes into a different regime, then all historical data will be invalid. What do you test on then? Chaotic systems have “pockets of predictability”, periods where they look like there’s a pattern, and then they change completely and are off somewhere else. Even the simplest chaotic systems do this. This leads people to believe they understand more than they really do. Then they end up stymied. Exactly the same thing has just happened to the pollsters. The answer is to embrace 20th century maths rather than 19th. Professor Booth is definitely heading in that direction.
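To make the “pockets of predictability” point concrete, here is a minimal sketch (a standard textbook illustration, not something taken from the post or the comment) using the logistic map, one of the simplest chaotic systems: two trajectories that start almost identically track each other for a while and then diverge completely.

```python
# Minimal sketch: the logistic map x_{t+1} = r * x_t * (1 - x_t) with r = 4
# is one of the simplest chaotic systems. Two nearly identical starting
# points stay close for a while (a "pocket of predictability") and then
# diverge completely.

def logistic_path(x0, r=4.0, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_path(0.300000)
b = logistic_path(0.300001)   # differs only in the sixth decimal place

for t, (xa, xb) in enumerate(zip(a, b)):
    print(f"t={t:2d}  x_a={xa:.4f}  x_b={xb:.4f}  gap={abs(xa - xb):.4f}")
```

For the first dozen or so steps the gap stays tiny; after roughly twenty iterations the two paths bear no resemblance to each other, even though the rule and the starting points are almost identical.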
Philip, I, of course, agree with you and it is refreshing to hear comments like yours, debunking so-called “experts” who have failed miserably in applying flawed econometric models based on invented data and who tell us what will happen in absolute terms. Forecasting makes sense only when it is based on stochastic models, and the confidence levels attached to a particular scenario occurring are clearly set out. While I don’t subscribe to most of John K. Galbraith’s thought, I wholeheartedly agree with him when he declared: “The only function of economic forecasting is to make astrology look respectable.”
A good critique. In my experience, the first question in response to any forecast should be: on what assumptions is it made? I was taught that the value of a forecasting process is not the numbers that appear at the end of it but the whole chain of reasoning and evaluation of the various factors, drivers, relationships, correlations and risks which are associated with making a forecast.
Some decades ago George Box said:
“All models are wrong, but some are useful”
And in Mervyn King’s recent book he says:
“It is better to be roughly right than precisely wrong.”
Models are approximations of what could happen if – and only if – all the assumptions used in the model prove to be correct, AND if the model includes all, or nearly all the required assumptions. That can be just about right enough to send a spacecraft to Venus, but it is obviously nowhere near true when modeling complex, chaotic, non-linear systems such as economies or climates.
The economist Victor Zarnowitz taught me in the 1960s at Chicago University that in economics “next year will be the same as this year” is as good as it gets. That served me well for decades in financial roles in the USA and U.K. It is gratifying to hear that it is still true!
The truth is that, unlike the rest of economics, macroeconomics is still stuck in a state like that of Molière’s 17th-century doctors: mumbling to each other in their own language, making their unscientific diagnoses and prescribing their patent remedies with no perceptible benefit to the patient, whose death is greeted with an insouciant shrug of the shoulders. In the same way, the failure to foresee, or even to attach a non-zero probability to, the 2008 crash has prompted little soul-searching and certainly no sign of a crisis in the world of academic macro. Colleagues seem quite relaxed about their failure, as if it is only the ignorance of the public (who mostly pay their wages) – not to mention the Queen – that leads them to question the usefulness of a profession that can do little more than give us a slightly better-than-evens guess as to whether next year’s growth will be 1% or 1.5%.