6 February 2024

A social media ban is not the way to protect children online

By Matthew Feeney

Tragedies make for bad policy, and the murder of the teenager Brianna Ghey is no different. Last week Ghey’s murderers, who were both 15 when they stabbed her to death in Cheshire almost a year ago, received life sentences. One of them, Scarlett Jenkinson, had watched videos of murder and torture, which no doubt contributed to Ghey’s mother, Esther Ghey, calling for a ban on children under the age of 16 accessing social media apps. Esther Ghey is not alone. The Conservative MP Miriam Cates and the Shadow Foreign Secretary David Lammy have called for the same policy. In a recent LBC appearance Lammy suggested that bans on children purchasing and consuming tobacco and alcohol could set a precedent for a similar ban on social media. Yet while no one with a functioning moral compass can react with anything other than horror to Brianna Ghey’s murder, we should be wary of policies that rest on weak analogies, invite dangerous unintended consequences, and further weaken the already fragile state of free speech in the UK.

A ban on under-16s accessing social media may at first glance sound sensible. After all, we know that children sometimes can and do view material on social media apps that is not appropriate or healthy. Pornographic and self-harm content is among the most discussed, but there is also widespread concern over social media’s role in facilitating bullying and worsening social anxiety and depression. Given that, why shouldn’t we pursue policies similar to those governing other products we know can harm children, such as alcohol and tobacco?

One of the problems with a ban on social media apps for children is determining who is under 16 and what constitutes harmful content. Age verification methods vary, but all of them must balance privacy against effectiveness. A social media site that asks users to click a button claiming they are over 16 would not be very effective, but it would have a comparatively minor effect on children’s privacy. On the other hand, an age verification method that requires children to upload photo ID or undergo a facial recognition or facial analysis scan might be more effective, but it intrudes into children’s privacy far more than ticking a box or clicking a button does.

And the effectiveness of such measures should not be overstated. Facebook and Instagram use Yoti, a facial analysis service that boasts it can correctly identify 13- to 17-year-olds as being under 25 with 99.93% accuracy. Yet for a social media ban on under-16s to be effective, such a service would have to be very good at determining whether someone is 15 or 16. Photo ID checks are one way to ensure that people are the age they claim to be, but aside from the privacy concerns associated with uploading photo IDs, not all 16- to 18-year-olds have equal access to photo ID, which can often be copied, faked, or stolen with relative ease.

Some readers at this stage may argue that I am making the perfect the enemy of the good. ‘Age/ID verification methods might not be perfect’, you could argue, ‘but social media harms are so dangerous that it is worth implementing a flawed system now to mitigate the harms. We can sort out the technical flaws that affect a minority of users later.’

But even if age/ID verification methods were perfect, I would still object to a social media ban on under-16s because of the nature of social media and the problems of scale and definition.

One of the most overlooked features of social media harm is that the harm of a piece of content often depends on its context rather than on the content itself. I and others tried in vain to make this point to lawmakers considering the Online Safety Act, which passed despite referencing ‘bullying’ content and content that ‘depicts real or realistic serious violence against a person’. No doubt the drafters of the legislation sought to tackle the spread of videos showing violent crime and children being bullied.

Yet the harm of these kinds of content is context-specific. A bully filming themselves harassing a classmate and sharing the video with friends as part of a bullying campaign could be more harmful than an anti-bullying charity sharing the same footage in promotional material to highlight the harms it is working to address. Same content, different contexts. Likewise, some of the footage of the 6 January 2021 assault on the US Capitol no doubt shows ‘realistic serious violence against a person’, but that footage can be used to highlight the risk of anti-democratic violence or to praise the wannabe insurrectionists. Same content, different contexts. While some content found online can be distressing, it would be naïve to assume that online content can be sorted into a binary ‘harmful/harmless’ scheme. No such nuances are at work in alcohol or tobacco bans: regular consumption of tobacco and alcohol has predictable and quantifiable effects on 11-year-olds regardless of their location, hobbies, or education.

Much of the content that most concerns lawmakers and parents relates to suicide, self-harm, and eating disorders. This kind of content is also difficult to place firmly in the ‘harmful by definition’ category. We should expect young people suffering from suicidal ideation and eating disorders to seek information and support online. It would be a shame if those resources suddenly became unavailable thanks to a ban on social media access. It would also be a shame if the millions of teenagers who benefit from the communities and learning opportunities social media offers were cut off while their peers around the globe continue to enjoy access.

While a few household-name companies dominate debates about social media, we should not forget that ‘social media’ is not easy to define. One of the reasons the Online Safety Act was so concerning was that its attempt to regulate YouTube, Facebook, Instagram, Twitter, and TikTok also affected Wikipedia, WhatsApp, and other services not traditionally considered ‘social media’. Any enforceable ban on under-16s using social media will have to include a definition of ‘social media’, and advocates of such a ban should be pressed on what theirs is. This definition problem does not arise with tobacco and alcohol, both of which are relatively easy to define scientifically and in legislation and regulation.

A ban on under-16s accessing social media would not only be misguided; it would also send another signal to the world that the UK is hostile to free speech. The Online Safety Act already places numerous obligations on all sorts of online providers, and one of its long-term effects will be that adults in the UK have less access to online content than adults in many other liberal democracies around the world.

None of this is to be flippant about the harms social media can cause children. There are steps parents, teachers, and others can take to limit children’s access to popular social media platforms, many of which already have child safety measures in place. Schools should consider banning pupils from using phones during school hours, and parents might limit their children’s access to phones during certain hours or in particular parts of the home.

It is a shame that the Conservative Party, which has traditionally championed institutions such as the family and schools, has in recent years reached for the blunt instrument of legislation when tackling complex social issues. The Online Safety Act is only one example. Sadly, others might follow.


Matthew Feeney is Head of Tech at the Centre for Policy Studies.