9 April 2019

Can the government nudge us towards a better internet?


Three years in the making and dogged by delays, the government today finally announced its white paper on internet harms, though not without a touch of irony: the paper was leaked on Friday, presumably to ensure that Brexit didn’t swallow yet another piece of policy.

There had been strong indications that the government had run out of patience and it was time to regulate. Today we discovered a little more about what that regulator might be tasked with.

The biggest unknown before today was scope: which companies would be included, which groups of users would be prioritised, and which harms would be subject to new regulation.

The remit of any new regulator will be huge. Social media companies were the most predictable targets, but the paper extends the threat of regulatory action to search engines, to file hosts and to messaging services. Taken together, the largest of these platforms process an eye-watering volume of content, leaving aside the thousands of smaller platforms fulfilling the same roles.

From gang violence to political advertising, whoever ends up tasked with this – whether it’s Ofcom or a new body – is facing a Herculean challenge. It is hard to imagine an organisation capable of handling digital literacy, child sexual abuse imagery, state-sponsored disinformation and algorithmic bias under one roof.

The harms identified by the government were a mixed bag. There was a lot that was already illegal: terrorism, child abuse imagery and so on. These are crimes, should be treated as such, and there is good evidence that technology companies treat them seriously. The waters are muddier in the second tranche of harms: cyberbullying, trolling and disinformation, broad ‘harms’ that sit on a spectrum of legality. There remain serious questions as to the role of government in deciding what facts and fictions are acceptable. Questions of privacy are largely relegated to future consultations, though for now the paper concedes its proposed regulations will not apply to private communications.

When quizzed about what success might look like, however, there was some cause for hope. What once might have been measured in clicks and takedown counts was expressed in a call for broad systemic change: a new approach to developing digital technology that puts user experience first.

If this turns out to be an effect of regulation – and it’s a big if – we might have cause for cautious optimism. Counting pieces of offending content and the speed at which companies remove them is at once fraught with difficulty and likely a poor measure of platform health. It demands the kind of transparency that will take years of legal wrangling, and the kind of resourcing that will take decades of staffing and training to achieve. It demands evidence, and evidence also takes time and requires access. This is not to say that such oversight is impossible, but the effort required would likely mean that change is felt by our kids, rather than by internet users today.

But if a healthy, safe platform is measured through systemic or architectural or cultural change, we might get somewhere sooner. Amid the medieval-sounding invocations of various online demons and spectres, made as much for the camera as for the paper itself, there was a sense from Sajid Javid, who launched the white paper, that at its core, the proposed legislation should nudge technology providers into reassessing how they build their products. That a relentless, venture capital-driven lust for profit could no longer come at the cost of the countries, communities and individuals who rely on the web in their millions. Prioritising spaces over speech and communities over content might well put us on the path to a better internet.

Alex Krasodomski-Jones is Director of the Centre for the Analysis of Social Media.