Attempts to eliminate online harms could do “great damage,” says new IEA research
New research paper from the Institute of Economic Affairs warns of unintended consequences from regulating online content

– Governments are increasingly concerned about what they consider to be the “harmful” content that is widely available through digital platforms.

– To counter these perceived harms, governments are seeking to monitor and gain access to these communications.

– Their efforts are not only likely to fail, but would put free speech and (virtual) assembly under threat.

New IEA research, titled ‘More harm than good?’ and authored by Head of Regulatory Affairs Victoria Hewson, examines the risks associated with state regulation of online communications.

Digital platforms have become part of everyday life and, until recently, were considered to be socially beneficial. But governments are growing increasingly worried about what they view as “harmful” content. It has been claimed that democratic processes have been subverted by online disinformation and misinformation, and that children and even adults are at risk of psychological harm and exploitation from offensive or inappropriate material.

In response, governments are now seeking to monitor and gain access to online communications for reasons of security and crime prevention. To that end, measures are being pursued to counter perceived harms – including the EU’s Code of Practice on Disinformation and the UK government’s forthcoming Online Safety Bill.

The research paper begins by setting out the “staggering” breadth of harms the government wishes to counter. It points out that many laws and regulations in the UK already exist to govern the lawfulness of speech – and apply equally in the online world as offline.

The paper then documents steps taken to date to regulate online content. It flags the shortcomings of the Online Harms White Paper, challenges the decision to hand Ofcom responsibility for safeguarding free speech and regulating material shared online (which “does not augur well for freedom of expression and association”) and, more broadly, argues that the whole basis of the Online Harms agenda is “misplaced”.

There is “little evidence” that criminal activities are caused or exacerbated by the availability of internet platforms; if anything, the internet has brought ‘hypertransparency’ to such activities. Rather than there being more harms and crimes in the world, they are simply more visible, and this has given rise to a “moral panic”.

Not only could efforts to right a perceived wrong be futile, but they present a serious threat to freedom of expression and association. For instance, the idea that IT and compliance professionals simply need to apply their skills and foresight to design away harm, and that they will do so in ways that respect free speech and are free from bias, under a regulator capable of monitoring the compliance of potentially every interactive platform in the world, is “almost laughable”.

Lastly, the author warns that there will be economic costs if innovation and competition suffer and the vast consumer surplus from digital services dissipates.

Victoria Hewson, Head of Regulatory Affairs at the Institute of Economic Affairs and author of ‘More harm than good? The perils of regulating online content’ said:

“The Online Safety Bill is expected to be announced in the Queen’s Speech next week. It is an extremely ambitious piece of legislation and it will be interesting to see how the drafters have approached the challenge of bringing coherence to the rather amorphous objectives outlined in the White Paper, while living up to ministers’ commitments to protecting freedom of expression.

“It is currently popular to attack online platforms for censoring speech online, but it’s often misunderstood how much of this is already driven by regulation or threats of regulation, and how much worse it could become if platforms are formally mandated to filter content based on parameters laid down by government and regulators. 

“As this paper highlights, platforms have so far been protected from liability for user content as much to protect the user from censorship, as to shield the platform itself. We may come to regret chipping away at this protection.”