The unintended effects of the EU’s crackdown on ‘disinformation’
The Code was entered into by Facebook, Twitter, Google, Mozilla and several industry bodies. It was formulated in response to a Commission Communication from April 2018, Tackling online disinformation: a European Approach. It includes ambitious, and rather chilling, provisions for signatories to “invest in technological means to prioritize relevant, authentic and authoritative information in search feeds” and to “dilute the findability of disinformation by improving the findability of trustworthy content”. Who is the judge of relevance, authenticity and authoritativeness? How are web platforms and search engines supposed to determine what is trustworthy? The answer the Commission has in mind seems to be a network of trusted flaggers and fact checkers – ‘independent’, of course, but supported and facilitated by the Commission. Reliance on fact checkers is questionable in general terms, given their practical limitations and the risk of bias and error. But reliance on fact checkers supported and endorsed by the state and by large incumbent technology companies to filter content is alarming. Apart from the risk to freedom of expression and association, working so closely with large incumbent operators to control content on the internet contradicts the stated policy aim of the Commission and member states to encourage competition in the digital sector.
In fact, as argued by Julian Jessop in the IEA briefing “Supervising the Tech Giants”, “It is notable that ‘fake news’ appears to be more successful where there is less competition – in Russia, for example. Diversity in media sources is therefore essential. Indeed, this may be one way in which the internet actually reduces the threat from ‘fake news’, which is why so many authoritarian regimes choose to restrict access to it.” In Jessop’s view, “the media should be viewed in the same way as any other economic activity. This means that, in general, consumers should be free to decide what to watch, hear and read, without having their choices limited by politicians, regulators or a handful of dominant producers.”
Signatories to the Code of Practice committed to report monthly to the Commission in the run-up to the European elections in May, as established parties worry about being routed by populists and parties that wish to disrupt the status quo in the European Parliament. The Commission’s statement on the latest reports, published on 23 April, does not mention the difficulties encountered with pan-EU campaigning, but welcomes the efforts on increased transparency, though it calls for “further technical improvements … necessary to allow third party experts, fact checkers and researchers to carry out independent evaluation.” The operation of the Code of Practice seems set to be a textbook example of a regulatory regime that does not improve efficiency, but instead serves the interests of those with political power – in this case including the tech giants themselves – by building ever greater barriers to entry.
Will this experience of unintended consequences be chastening to legislators, within the EU and in member states, currently competing to see who can be strictest and most serious about cracking down on online harms? In this case MEPs and Commissioners were well placed to signal their irritation at the effect of the Code of Practice and to influence the future direction of the regulatory framework and the terms of business of the tech giants involved. Most users and providers of digital services do not have this privilege and will bear the costs of ill-thought-out regulatory interventions for a long time to come.