The violent storming of the Capitol by a mob of Trump supporters took place in the smouldering ruins of our public sphere. The front pages of the Internet carried incitements to violence and conspiracy theories, haphazardly labelled by desperately inadequate platform initiatives. Live coverage shared online by neo-Nazis played across every major news network.
The days of Trump’s presidency are numbered, and the same is probably true of his beloved Twitter account. When the President finally spoke, his cocktail of lies and thinly veiled encouragement was splattered all over our screens until the platforms made the unprecedented decision to shut him up. As years of bubbling extremism came to a boil, there was a bitter irony to the sanitised language of Twitter’s policy team: as smoke rose from the Capitol, this Tweet posed “a risk of violence”.
The violence at the Capitol building was grimly predictable. Facebook, Twitter and Twitch delivered a payload carefully stage-managed on Parler, Gab and TheDonald.win. Reporting by Bellingcat the day before the attempted coup showed Trump supporters, extremists and neo-Nazis openly discussing plans to storm the Capitol building, to smuggle guns into DC, even to take hostages and stage executions. The rioters streamed their crimes. A man dressed as a QAnon shaman took selfies in the Speaker’s chair.
Judged by the rapid rise of Nazi-friendly alternatives, the major platforms’ efforts to exclude these violent extremists may well have succeeded. But an engagement-driven business model leaves those platforms no choice but to cover extremists’ every move in bitter, uncensored technicolour.
Social media platforms are driven to promote and disseminate extremism, and extremism has been utterly reshaped by this technology. The enormous success of global platforms like Facebook and Twitter, driven by clicks and attention, has shaped a political extremism geared to those same goals. A Facebook experiment in demoting posts judged ‘bad for the world’ was effective in every way but one: it reduced the time users spent on Facebook, so a ‘different approach’ was taken. The performative politics of confederate flags and Nazi memorabilia, selfies and livestreams, are the symptoms of a new media and a new politics.
We’re still prescribing for these symptoms, not their causes. Fact-checking initiatives, digital literacy efforts and health labels are at best plasters and painkillers. At worst they are a dangerous distraction. A recent Facebook effort to redirect extremists searching for radical content identified a paltry 57,000 users, of whom fewer than 4% even engaged with the process and fewer than 0.1% completed it. Bafflingly, the programme was judged to be “broadly successful”.
Evaluations of the platforms’ fact-checking efforts are similarly disappointing. We shouldn’t be surprised. These initiatives are like barnacles clinging to an oil tanker: below the waterline, barely visible and incapable of changing the ship’s direction. If they did, they’d be scraped off.
Throwing engagement limits, labels and few-hour suspensions at the problem might ease some of the public pressure on social media companies to ‘do something’. But the Trumps of the world will still give speeches broadcast to millions; extremist organisers and conspiracy theorists, even if turfed out of one platform, will find their way to another and draw others in with them; and we will still refresh our feeds for the latest story, the most outrageous video. Treating the problem as a few bad apples online rather than an entire ecosystem means the toxic roots are never fully addressed, with violent and tragic consequences.