Rethinking Risk and the Precautionary Principle


Executive Summary

The book begins with an exploration of the development and meaning of the precautionary principle (PP). Definitions abound (at least 19 different versions exist in various international agreements and statements from different groups). At one extreme is the Greenpeace definition: “do not admit a substance until you have proof that it will do no harm to the environment.” This is both philosophically absurd and utterly misanthropic. It is absurd because it is impossible to demonstrate the absence of harm: however thorough one’s assessment of a technology, some potential harm may always be overlooked. Taken literally, this would effectively shut down civilisation. Even if that is not Greenpeace’s intention, focusing on environmental harm while ignoring the effects, good or bad, that the technologies causing these putative harms may have on human beings is itself misanthropic. Greenpeace, like other radical environmental organisations, seems to want a zero-risk society. But as Morris points out, “all human activity involves risk, so the only way to achieve zero risk is to die – which is not a very constructive solution to humanity’s problems!”

Risk is often conceptualised as a purely bad thing, something to be avoided. But some types of risk-taking may lead to great rewards. Buying Microsoft or AOL shares when they first came to the market would certainly have been risky, but it would have yielded spectacular returns. Generally, people who buy stocks accept that the value of their investment can go down as well as up, but they judge the risk of loss worthwhile because on average the returns are better than those from safer investments such as bank accounts. Higher returns mean more wealth, and more wealth means more safety. The same is true of technology generally: new technologies entail new risks, but they often enable us to control old risks better. Gas-fired heating entails a risk of explosions from leaks, but it reduces the risk of lung disease resulting from coal-burning fires. And coal fires themselves reduce the risk of death from hypothermia.

Of course some risks are probably worth avoiding if at all possible: nuclear war, for example. But how far should we extend this precautionary approach? Most definitions of the precautionary principle are less extreme than Greenpeace’s. But the result is a ‘principle’ that is so vague as to be meaningless. Consider the definition agreed at the Earth Summit in Rio:

“Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.”

Who can disagree with such a proposition? After all, ‘full scientific certainty’ is unattainable (it would imply that we have a theory of everything from which all particular events can be reliably predicted), so the lack of it could never be a good reason for postponing anything. Also, ‘cost-effective’ is a wonderfully ambiguous term, which is likely to mean different things to different people. To some, spending 10% of GDP on saving Borneo’s trees might seem like a cost-effective thing to do. To others, spending 0.01% of GDP on saving all the world’s trees might seem profligate. So what this ‘principle’ does is limit the circumstances under which something that is impossible can be used as a justification for not doing something else that may or may not be desirable. Not much of a ‘principle’ really!

In reality the ‘precautionary principle’ is a meaningless soundbite that can be used to justify just about any policy, including quite contradictory policies. An example is given in the book by Charles Rubin, who discusses the merits of taking action to counter the threat of an asteroid strike. Professor Rubin points out that if a sufficiently large asteroid hit the earth, it might kill all life. Following the precautionary principle, the possibility of such an occurrence might justify the creation of an orbiting defence system comprising nuclear weapons that could be used to divert such an asteroid from its biocidal course. However, such a defence system could itself quite conceivably destroy all life on earth (either accidentally or intentionally). So, following the precautionary principle, such a defence system should not be built. A principle that can be used to justify both X and not-X is not much of an aid to decision-making.

The precautionary principle has become an excuse for imposing arbitrary regulations. Accepted at the national level, it is applied by unaccountable international bodies, such as the UN and its various affiliates, to notional problems promoted by environmental, consumer, and other ‘civil society’ organisations. These international bodies then promote the drafting of international treaties which, once signed, are used by national regulators to justify the imposition of restrictions that could not have been obtained through purely national legislation.

Aaron Wildavsky points out in Chapter 2 that trial and error is an effective means of discovering both the benefits and the drawbacks of new technologies, and has served mankind well throughout history. Whilst it is clearly inadvisable to try technologies that are known to have serious negative effects and few beneficial effects (the plague and nuclear war are examples), imposing a general prohibition on the use of new technologies until solutions have been found to all their potentially harmful side-effects is a recipe for stasis. This ‘trial without error’ (the environmentalists’ favoured version of the precautionary principle) is ultimately likely to be a less effective way of regulating the introduction of new technologies than trial and error. Permitting trials with small errors (a weaker version of the precautionary principle, favoured by policymakers) is an improvement on trial without error, but it nevertheless encounters many problems and is ultimately likely to slow down progress, with attendant negative effects both for the economy and for the health of people and the environment.

The authors of the next few chapters deal with specific applications of the precautionary principle. Tony Gilland shows that the precautionary principle was used to justify restrictions on the commercial growing of GM crops in Britain in spite of the very ambiguous evidence relating to the possible impact of such crops on farmland birds. As a result, British companies are, without good reason, being prevented from developing a technology that may provide enormous benefits.

Henry Miller and Gregory Conko argue that the Biosafety Protocol to the Convention on Biological Diversity, agreed in Montreal in January 2000 and predicated on the precautionary principle, will most probably have the opposite effect to that intended: it will encourage extensive farming at the expense of biodiversity. The authors detail the evolution of the Protocol and its implications, as well as discussing the effects of the recent shift in direction of the Codex Alimentarius Commission.

Charles Rubin discusses the problem of how to deal with the (real but small) risk of the earth suffering a catastrophic hit by an asteroid. He notes that the precautionary principle has been used, on the one hand, to call for massive spending on defending the earth against asteroids, and on the other to call for a moratorium on the development of such a defence system, on the grounds that it might increase the risk of a nuclear exchange. To escape from this stalemate it is necessary to apply more conventional risk-risk or cost-benefit approaches. These might lead to the conclusion that it would be worthwhile to develop a monitoring system that can provide several years’ warning of any impending collision and thereby enable the people of earth at that time to invest in defensive measures. Meanwhile, some money might be spent on evaluating how to convert ICBMs and other nuclear missiles to the purpose of deflecting as-yet undetected asteroids.
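To give a sense of the cost-benefit arithmetic such an approach involves, the sketch below compares the expected annual loss from a catastrophic impact with the annual cost of a monitoring programme. Every figure is a hypothetical placeholder chosen only to show the structure of the calculation; none is an estimate from Rubin’s chapter.

    # Illustrative cost-benefit sketch for asteroid monitoring.
    # All figures are hypothetical placeholders, not estimates from the book.
    annual_impact_probability = 1e-6     # assumed chance per year of a catastrophic hit
    expected_fatalities = 1e9            # assumed deaths if such an impact occurred
    value_per_statistical_life = 5e6     # assumed value per life saved, in dollars
    annual_monitoring_cost = 50e6        # assumed yearly cost of a detection programme

    # Expected annual damage if no warning system exists.
    expected_annual_loss = (annual_impact_probability
                            * expected_fatalities
                            * value_per_statistical_life)

    print(f"Expected annual loss:   ${expected_annual_loss:,.0f}")
    print(f"Annual monitoring cost: ${annual_monitoring_cost:,.0f}")

    # Crude decision rule: monitoring is worth considering if it costs less than
    # the expected loss it could help avert (ignoring, for simplicity, how much
    # of that loss a timely warning would actually prevent).
    if annual_monitoring_cost < expected_annual_loss:
        print("On these assumptions, monitoring passes a crude cost-benefit test.")
    else:
        print("On these assumptions, it does not.")

On the placeholder numbers used here monitoring comfortably passes the test, but the point of the exercise is the comparison itself: unlike the precautionary principle, it forces the costs of acting and of not acting onto the same scale.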

Helene Guldberg shows that in Britain the precautionary principle has infected the way in which child-child and child-adult interactions are treated. As a result, children are being afforded excessive protection, which actually hinders their development. Dr Guldberg contrasts this with the situation in Norway, where children are freer to experiment.

Bill Durodié describes how the EU’s use of the precautionary principle derives from, and interplays with, a general fear of new technology that is sponsored by environmental and consumer groups but gained its initial impetus from crisis events such as BSE. A case study of the EU ban on soft PVC toys shows how scientific evidence is manipulated in the context of the precautionary principle in order to achieve a particular end.

Bruce Yandle discusses how the Kyoto Protocol, an international agreement justified on precautionary grounds as a means of limiting the possibility of catastrophic global climate change, evolved in a context of lobbying by various interest groups. The Protocol is best seen as a mechanism for creating a more centralised system of energy management for the world, with benefits for certain narrow and well-funded groups (environmental organisations, bureaucrats) and costs for the majority of people.

Indur Goklany considers three instances in which environmental organisations have argued that the precautionary principle should be applied: the imposition of swingeing restrictions on greenhouse gas emissions, a ban on the use of DDT, and a ban on the cultivation of GM crops. In each case, he shows that the conventional application of the precautionary principle fails to account for the impact on humanity of taking the precautionary action. When the principle is applied in a broader context (i.e. taking precaution against the loss of human life and harm to human welfare as well as against environmental damage), the conclusions are quite different: these supposedly precautionary measures would in fact be imprudent, because they would increase overall risks to public health and the environment.

The final two chapters discuss some of the problems posed by subjectivity and the lack of certainty in science. John Adams argues that attempting to regulate risk can often have perverse consequences, because the perception of risk is a subjective matter that depends on a variety of factors, many of which are known only to the individual decision-maker. In some cases, attempts to make us aware of very small risks may lead to an over-reaction with worse consequences than the original risk. The negative publicity in the mid-1990s concerning the risk of thrombosis associated with taking the second-generation birth-control pill led to an increase in the number of unplanned pregnancies; pregnancy itself carries a higher risk of thrombosis than the pill! Where the ‘problem’ under consideration is merely a threat – a ‘virtual risk’ – attempts at risk management through application of the precautionary principle are even less likely to be successful, and more prone to unforeseen negative consequences, than attempts to manage real risks.

One of the reasons the precautionary principle has been able to flourish is that there is, and always will be, uncertainty over the validity of scientific claims. In the final chapter, Robert Matthews offers insights into how one might cope better with the inherently subjective interpretation of scientific research. He shows that dogmatic interpretation of research has in many cases slowed down the evolution of scientific understanding, and argues that Bayes’ theorem offers a better means of dealing with subjectivity. This theorem might well form the basis of a more rational approach to dealing with threats about which we are uncertain.
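To give a flavour of the Bayesian approach Matthews advocates, the sketch below updates a prior belief that a substance is hazardous in the light of a positive but fallible study result. The prior and the likelihoods are hypothetical placeholders, not figures from the chapter.

    # Minimal illustration of Bayes' theorem applied to an uncertain hazard.
    # All probabilities are hypothetical placeholders, not figures from the book.

    def posterior(prior, p_evidence_if_true, p_evidence_if_false):
        """Return P(hypothesis | evidence) using Bayes' theorem."""
        numerator = p_evidence_if_true * prior
        denominator = numerator + p_evidence_if_false * (1.0 - prior)
        return numerator / denominator

    prior = 0.01                    # assumed initial belief that the substance is hazardous
    p_positive_if_hazardous = 0.8   # assumed chance a study flags it if it really is hazardous
    p_positive_if_safe = 0.1        # assumed chance of a false alarm if it is in fact safe

    after_one = posterior(prior, p_positive_if_hazardous, p_positive_if_safe)
    print(f"Belief after one positive study:   {after_one:.1%}")   # roughly 7.5%

    # A second, independent positive study uses the previous posterior as the new prior.
    after_two = posterior(after_one, p_positive_if_hazardous, p_positive_if_safe)
    print(f"Belief after two positive studies: {after_two:.1%}")   # roughly 39%

The particular numbers matter less than the discipline they impose: a single alarming finding against a low prior probability of harm does not warrant near-certainty, while accumulating independent evidence shifts the belief substantially. This graduated response to uncertainty is precisely what the precautionary principle lacks.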

 


