28 April 2021

The Online Safety Bill is pushing us down a dangerous track

By Matthew Lesh

The forthcoming Online Safety Bill is meant to make the UK “the safest place in the world to be online but at the same time defend freedom of expression” (my emphasis).

The word “but” is doing a lot of heavy lifting in these proposals.

There will be a “duty of care” on digital companies but no new consequences for perpetrators of unlawful or gruesome behaviour.

It is meant to focus on serious online issues but will also target “offensive material” and any speech that could cause “significant adverse…psychological impact”. That is so broadly drawn it could include practically any speech on any controversial issue.

The state will not mandate removal of legal speech but it will require private companies to have guidelines that address “legal but harmful” content. If a company fails to comply with this dystopian mandate it will face gigantic fines and other penalties.

Free speech isn’t threatened, apparently, but “journalistic content” will have a special exemption. Don’t bother asking why nobody else gets that protection or what they mean by “journalistic content” – nobody has a clue.

Privacy will be protected but encryption must be curtailed.

Decisions will be made democratically but overseen by Ofcom, a quango with a poor history of protecting free speech.

Ofcom will create codes of conduct for legal and illegal speech but will have no corresponding mandate to consider free speech.

The UK champions press freedom around the world but these new laws will target “disinformation and misinformation” – providing a censorious justification for tyrants elsewhere. (Germany’s NetzDG law inspired new online censorship legislation in Russia, Kyrgyzstan and Turkey.)

Digital competition is a serious issue but new regulations will create huge new costs and barriers that large companies can afford but could be ruinous for start-ups.

A muddled bill

You are probably feeling a tad confused at this point. Most people are, including the ministers and civil servants who are trying to design the scheme.

There has been a flurry of activity in this space over the last four years. Countless speeches, parliamentary debates and inquiries. A green paper, a white paper, an initial response and then a full one. Each has thrown out new ideas and contradicted the last.

We are now on the cusp of a much-delayed bill, but the precise details remain unclear. The truth is that these are extremely complex and difficult issues, full of unacknowledged trade-offs and uncertainty.

Online speech moderation is a minefield. There is both excessive, inconsistent censorship and under-moderation of genuinely unlawful speech. The best-resourced and most technologically advanced companies in human history are struggling to grapple with these issues.

The automated systems have been turned up to full blast. But they are failing to distinguish between genuine debate, humour and serious harassment. Facebook recently removed a post of mine that stated “Americans are weird” under their hate speech policy. Suffice it to say that I did not realise Americans are a protected class that needs to be shielded from criticism.

These censorship issues will only get worse when the Government gives Ofcom the power to dictate how private companies moderate their platforms. It will also fail to address the very real harms happening on the internet – particularly since the “dark web”, where the vast bulk of unlawful behaviour happens, will not be in scope of the new laws.

Any offline law applies online, and, at least on paper, there’s more capacity than ever to hold the most malign accountable. Every digital activity leaves a footprint. But law enforcement can barely keep up with the reports of the most grotesque illegal content.

The current proposals will do practically nothing to address this issue: if anything, they will make it more difficult to discover problematic content by ensuring it is removed before it can be catalogued. This is a particular problem for people trying to track abusive behaviour, and for international journalistic organisations that use grisly YouTube footage to document war crimes.

Additionally, speech that appears “harmful” in some circumstances can be a necessary element of public debate, or therapeutic, in other contexts. Take young women and self-harm. It is not uncommon to form survivor groups and post scars in support of recovery from a difficult mental health crisis. A mandate on firms to remove material promoting self-harm, which may sound unobjectionable, would have to distinguish between actual promotion and much-needed discussion.

In a new paper from the Free Speech Union – that I co-authored with Radomir Tylecote, Victoria Hewson and Bryn Harris – we highlight the manifest threats to freedom of speech presented by the Government’s proposals and call for a new, more effective approach.

To start, “offensive” and “legal but harmful” speech and “disinformation and misinformation” should be entirely removed from the framework. These are expansive concepts and would mean a parallel legal system in which speech is more restricted online than offline. The public, not the state, should be allowed to separate truth from fiction, even if that means some offence is caused along the way.

Even if these powers were used sparingly by today’s Government, there is nothing stopping them from being wildly expanded by a future set of ministers with less concern for free speech. Once you create a framework for censorship of previously legal speech, the limits are never-ending. Removing lawful speech from the bill’s scope would allow the Government and digital firms to focus on serious issues like child exploitation and terrorism, rather than on who offended whom.

Ofcom should be mandated to pay due regard to freedom of expression and be prevented from punishing a company that refuses to remove content that is protected under English common law or the European Convention on Human Rights. Ofcom should also be required to produce free expression impact assessments and include free expression in all codes of practice. This will help ensure that there is not excessive pressure on companies to censor speech.

For illegal speech, the focus should be on the perpetrators. Law enforcement agencies should be better resourced to deal with serious unlawful behaviour, and sentences should be reviewed where necessary. The Home Office should also consider issuing guidance on the use of Injunctions to Prevent Nuisance or Annoyance (IPNAs) against serial offenders or abusers in the digital space. Where there is evidence of substantial unlawful behaviour, these could restrict an individual’s ability to use a social media platform.

The impetus to ‘do something’ is pushing the Government down a dangerous track. It’s time to rethink the approach in a way that both better protects free speech and does a better job targeting unlawful behaviour.


Matthew Lesh is the head of research at the Adam Smith Institute.

Columns are the author's own opinion and do not necessarily reflect the views of CapX.