After repeated delays and several days of leaks, the much-anticipated Online Harms White Paper was finally published on Monday. It sets out the Government's vision for the UK to be "the safest place in the world to go online, and the best place to start and grow a digital business". In doing so, it reflects the struggle between its joint authors, the Department for Digital, Culture, Media and Sport and the Home Office, in striking the right balance between those competing goals. In practice, the White Paper leaves us with many grey areas still to be resolved.
The Paper proposes a new duty of care, giving online platforms statutory responsibility to prevent their users from coming to harm. Compliance with this duty will be overseen and enforced by an independent regulator, which will issue codes of practice. These codes will cover not just what is already illegal online, but also what is simply deemed unacceptable.
Extending the duty to legal content raises the question of to whom content is harmful. For example, a violent Peppa Pig parody video might be harmful to a young child watching it, but not to an adult able to process what they are watching. Likewise, teaching a pug to perform a Nazi salute may show poor taste, but it is not clear that doing so actually harms anyone or incites such harm.
The boundaries become even less clear when considering what may count as politically motivated disinformation. Beyond video content, where should the line be drawn for controversial Reddit threads or discussions of difficult topics on Mumsnet? No two platforms are the same, so establishing an appropriate standard of harm for each will be no easy task.
Imposing sanctions on platforms for content that may be harmful to a particular vulnerable group may therefore create a precedent for sweeping and inconsistent censorship powers. The internet is awash with content that is unpalatable but still legal creative expression. The White Paper gives no clear indication of how far the measures would go in curtailing freedom of speech.
With ambiguous rules but the prospect of substantial fines and individual liability for senior management, platforms anxious not to fall foul of their duty of care are likely to be over-cautious. For a site like YouTube, which receives over 400 hours of new video every minute, stringent automated upload filters would be the only practical way to stem the tide. Yet algorithms sophisticated enough to detect nuanced concepts of harm are unlikely to be perfect. They may stifle our brilliant capacity for satire and sarcasm, or filter out legitimate reporting of important real-world events that are in the public interest.
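To make that over-blocking incentive concrete, here is a minimal illustrative sketch in Python. Everything in it is invented for the purpose of the example: the uploads, the risk scores and the thresholds are hypothetical, and real moderation systems are vastly more complex.

```python
# Illustrative sketch only: a toy upload filter showing why the threat
# of large fines pushes a platform toward over-blocking. All names,
# scores and thresholds below are invented for illustration.

def should_block(risk_score: float, threshold: float) -> bool:
    """Block any upload whose estimated harm score meets the threshold."""
    return risk_score >= threshold

# Hypothetical classifier outputs for three uploads: clearly harmful
# content, a satirical clip, and legitimate news reporting.
uploads = {
    "violent_incitement": 0.92,
    "satirical_sketch": 0.55,   # satire often resembles what it mocks
    "news_report": 0.48,        # reporting on real-world violence
}

# A confident platform might block only high-scoring content (0.9);
# one facing heavy sanctions will rationally pick a cautious cut-off (0.4).
for threshold in (0.9, 0.4):
    blocked = [name for name, score in uploads.items()
               if should_block(score, threshold)]
    print(f"threshold={threshold}: blocks {blocked}")
```

With the lower threshold, the satire and the reporting are removed along with the genuinely harmful clip, and under ambiguous rules backed by heavy penalties, the lower threshold is the rational setting to choose.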
The reality of platforms having an automatic duty to moderate everything that happens on their sites brings the aim of making the UK the safest place to be online into conflict with making it the best place to start and grow a digital business. Challenger websites looking to disrupt the hegemony of Facebook and Google will face proportionately much higher costs to filter their content, both in paying for sophisticated AI software and in employing staff to oversee compliance.
Further costs may be incurred if the codes of practice end up requiring fact-checking services to combat disinformation and fake news. Far from limiting the reach of the big tech firms, the new rules may serve to entrench their dominance as start-ups are mired in red tape. It seems likely to remain at the regulator's discretion whether it focuses on the largest platforms or applies the same standards to all.
And then comes the question of money. The Paper is notably thin on how it intends to fund its all-powerful regulator, whether that means augmenting Ofcom or creating an entirely new body. Although algorithms for catching harmful content are constantly advancing, human reporting is still a key part of the process, and for a regulator to stay ahead it will need serious resources behind it. Fines on offending platforms may yield significant sums, but income from that source cannot be guaranteed; ideally, the threat of sanction should be enough to coax most platforms into compliance. Various other options, including charges or a levy on companies whose services fall within the regulator's scope, are mentioned as up for consideration, but only in a cursory way.
Whilst it is encouraging to see protection of freedom of expression as a key part of the document's vision, there is little in it about how that will actually be achieved. Much more meat is needed to make clear how far these regulations will go, and how heavily they will be enforced once the regulator is emboldened to act.
The clamour to do something must not trump the need to temper what we do in the interests of everyone freely using the internet. As the Paper points out, the world is watching what we do here. Being one of the first out of the blocks with a plan to tackle online harm isn't easy, and a lot more work is needed before we know whether or not we'll get it right.