30 September 2020

Safety without censorship: the UK must choose a better path to internet regulation

By Caroline Elsom

When it comes to tackling online harms, the Government’s rhetoric is always about being the toughest, going the furthest, and leading the world in rooting out danger wherever it lurks.

The Online Harms White Paper, published in April 2019, sought to do this by handing unprecedented powers to Ofcom. Under the plans, the regulator would be able to impose fines and individual criminal liability on directors, and even block the activities of businesses it deems not to have complied with a new ‘duty of care’.

Some might see these as vital measures to ensure our collective safety, but they risk making a virtue of heavy-handedness, with the regulator given a role as expansive and intrusive as possible. In aiming to be on the cutting edge of policy innovation, the Government may end up on the precipice of failure.

While Covid-19 and Brexit continue to delay the progress of this legislation, there is an opportunity for the Government to reflect on the course the White Paper sets us on and choose a better path. My new report for the Centre for Policy Studies, Safety without Censorship, offers an alternative vision that would give targeted powers to Ofcom while clearly separating its responsibilities over legal and illegal content.

The world watches what the UK does and our decisions have consequences. As the G7 presidency looms next year, it is vital that the UK does not set harmful precedents or trample our flourishing online ecosystem.

We only need look to our near neighbours to see how wrong it can go when regulation empowers censorship. Germany’s Network Enforcement Act (NetzDG) mandates that social media companies delete what it calls ‘manifestly unlawful’ posts on their platforms within 24 hours of being notified. Fail, and they risk fines up to £44 million. Within a few months of its introduction, the legislation had been used to censor content from political parties, political satire and even the very politicians who pushed for its introduction.

It’s telling that the NetzDG model of intermediary liability has since been proposed or adopted in at least 10 countries which are classed as ‘not free’ or ‘partly free’ in Freedom House’s 2019 assessment of Freedom on the Net. Several of these countries now require internet intermediaries to remove content falling under such broad categories as ‘defamation of religions’, ‘anti-government propaganda’ and even ‘unreliable information’. When the latter category was adopted in Russian law, the Kremlin explicitly cited NetzDG as an example of false information being “regulated fairly harshly” in other European countries, making it necessary in Russia too.

Yet the UK’s proposals would go much further than this use of ‘notice and takedown’ orders. Even with some clarifications in the Government’s initial response to the White Paper’s consultation, the wider ‘duty of care’ responsibility for harm prevention may still effectively require platforms to actively monitor and filter content to comply.

Similar rules on ‘false statements of fact’ in Singapore’s Protection from Online Falsehoods and Manipulation Act 2019 have already been used to target an opposition party, an independent news outlet and the Malaysian Government. For example, the opposition Singapore Democratic Party received a correction notice after online posts reporting on rising redundancies and unemployment among Singaporean professionals. Nigeria and Thailand have since announced that they intend to emulate Singapore’s legislation.

In developing countries, crackdowns on free speech online have already had devastating consequences for internet access for their citizens. In Uganda, a new social media tax has been introduced, charging users to access sites – all in the name of curtailing ‘gossip’. Beyond the dystopian implications, these measures have a deeper impact on the poorest in society by effectively pricing them out of the internet.

Over the border in Tanzania, content creators now have to pay an eye-watering £770 in registration and licensing fees. Contributors’ details must be stored for 12 months, financial sponsors disclosed and internet cafes must have surveillance cameras. And once they’ve jumped through all those hoops, creators could still be hit with a £1,750 fine, or even jail time, if their content is deemed ‘indecent’, ‘leads to public disorder’ or is simply ‘annoying’.

So, striving for a ‘tough’ approach to internet regulation certainly puts the UK in some uncomfortable company. It also gives repressive regimes just the sort of excuse they crave to clamp down on their citizens’ human rights.

Even if ministers are not persuaded by arguments about censorship and civil liberties, the Government should recognise the economic case for diverting from its current path. While imposing heavy duties on content providers is hugely advantageous to tech giants like Facebook, which have the resources to meet tight takedown deadlines, it’s a nightmare for smaller start-ups whose pockets are nowhere near as deep. Ultimately it’s not just small businesses, but ordinary consumers – the very people we are so keen to protect – who will lose out as a result.

With world leadership comes global responsibility. If the UK wants to succeed at creating the best system for tackling online harms, it must take a more balanced and better targeted approach to internet regulation than the sweeping duties outlined in the White Paper. Quashing freedom of speech and free enterprise online has dangerous consequences, not just here, but across the globe.

Caroline Elsom is a Researcher at the Centre for Policy Studies.

Columns are the author's own opinion and do not necessarily reflect the views of CapX.