31 March 2021

Online radicalisation won’t be stopped with the click of a button

Big Tech is under pressure as never before to stop amplifying violent extremism through online platforms that fail to restrict access to the materials and people that groom potential terrorists. After the Capitol Hill insurrection, animated by social media, the pressure is on for a fix before legislators step in with little regard for the tech giants’ bottom lines. It is far from simple, though. The unintended consequences of one such initiative played out this week in Washington DC, as legislators heard about Google’s ‘redirect’ experiment.

Redirect as a concept seems to make eminent sense, as far as it goes. The principle, developed by Moonshot CVE, a British counter-extremism company with big ambitions, was adopted by Google. People who searched the platform for extremist material using combinations of keywords were ‘baited’ with advertising, then automatically redirected to sites selected to counter violence and hatred. One such site was run by a man using the pseudonym ‘Beau of the Fifth Column’, described by one House legislator as a completely inappropriate role model: “They sent people who were already looking for violence to a convicted felon with anarchist and anti-Semitic views.” Beau, whose real name is Justin King, was apparently unaware of the problematic traffic being pushed his way. He denies being in favour of violence.

This innovative approach reveals two problems. The first is that Google is simply seeking to outsource its exposure by employing third parties to patrol the still virtually lawless frontier of online extremism. There are numerous accounts of terrorists across the ideological spectrum who have been mobilised into violence by consuming online hatred, and the evidence is overwhelming that the medium plays a role in turning alienation and grievance into violent action. Some of the accelerant is provided by Big Tech’s cash cow – the algorithms that analyse a viewer’s consumption patterns and interests, then direct the innocent browser to other sites, including the commercial sites that pay Google, Twitter and Facebook for advertising. But this approach also has the effect of building a digital echo chamber for those interested in ideological violence and the disinformation and conspiracy theories that amplify it. Whatever effort is needed ought to be the sole responsibility of the big platforms. This is a fundamental design fault that plainly cannot be fixed by the sanitising input of partners. The cheap camouflage ain’t working.

The second problem is the more wicked. We must not be fooled into thinking that initiatives like ‘redirect’ are any more than an attempt to treat the symptoms of a much deeper malaise. The Covid crisis has fixed isolated people in front of their screens worldwide, aggravating a pre-existing threat: credulous, alienated and vulnerable targets who are more available than ever for online grooming. Why they are available at all is a much more urgent and intractable question than what happens to them next. While there is no doubt that, for example, the Christchurch terrorist Brenton Tarrant was hugely influenced by hateful material he consumed online, the precursors that drove him to a state where he dehumanised then massacred dozens of innocent people were firmly in place beforehand. Radicalisation is certainly augmented by the internet, but much of the contact needed to convert it into murder still takes place in the real world. The individual pathologies that drive people into terrorism are much harder to discern and treat than a simple tech fix allows.

The exposure of ‘redirect’ as a flawed idea will be embarrassing, but I hope it doesn’t deter companies like Moonshot CVE from taking risks. Countering violent extremism is a profoundly complex and risky business. The company rightly says that agents of influence who can drive people away from violent extremism will (and probably must) include those who have problematic backgrounds but who can relate to the next generation of jihadis and extremists of right and left.

If the vetting of people who can model non-violent alternatives becomes an exercise in extreme risk aversion, it is unlikely to be of any use. We cannot know how many people the initiative might have helped. But without the big guns of Google, Twitter and Facebook fully deployed, the battle for online moderation will never be won. Skirmishing by well-meaning partners is not enough.

Professor Ian Acheson is a former prison officer and Senior Advisor to the Counter Extremism Project

Columns are the author's own opinion and do not necessarily reflect the views of CapX.