17 July 2018

We cannot allow the internet to be a safe space for terrorist propaganda

By Nikita Malik

It is perhaps unsurprising that the new powers recently added to the Counter-Terrorism and Border Security Bill have been met with criticism.

While it remains an offence to keep, publish, or distribute extremist content – both online and offline – a new “three strikes” law will apply a penalty of up to 15 years in jail for those who view terrorist-related material online three or more times. Human rights groups have argued that in some cases this may violate Article 10 of the European Convention on Human Rights, which protects freedom of expression, including the right to receive information.

Any change in legislation will be met with challenges – and rightly so – but legislation must keep up with the times. For years, the Centre on Radicalisation and Terrorism (CRT) at the Henry Jackson Society has argued that vulnerable individuals should not have easy access to extremist and instructional terrorist material. Regulating the online space is crucial to obstructing the flow of logistical information and the consumption of propaganda, both of which underpin threats to national security.

Our seminal piece of work, Islamist Terrorism, profiled 269 terrorist offenders and revealed that they commonly consumed extremist material prior to offending. Between 1998 and 2015, the period under examination, the dissemination of terrorist publications more than trebled, reflecting the growing role of the internet in exposing vulnerable people to jihadist propaganda.

More importantly, the internet was cited as a major platform for engaging with extremist content, and as an outlet for accessing “inspirational” extremist preachers, graphic content, and hate material. Nearly one in ten Islamist-related offences was committed by individuals who were known to have watched beheading videos. This desensitisation to violence is one of the most common factors on a person’s path to radicalisation.

Only yesterday, members of my team and I stumbled across a video on YouTube that used “smiling corpses” – the dead bodies of fighters in Iraq and Syria – to glorify shahid (martyrdom) on behalf of Islamic State. The channel is still active on YouTube and is not yet behind an age filter. While it could be argued that people who want to consume such material could also find it offline, we must ensure that people do not simply stumble across such content, and that recommendation algorithms do not push it to them on the basis of their search history.

To achieve this, content that glorifies terrorism online should be removed and those who promote or advertise it penalised. Part and parcel of this is faster proscription of groups: technology companies are often reluctant to remove material promoted or uploaded by groups that have not been proscribed by the government. Non-violent groups that propagate extremist views and host such speakers should also be brought to the attention of technology companies, which can then remove this content from their platforms.

I would argue that one of the biggest omissions in the Counter-Terrorism and Border Security Bill is that it does nothing to actively engage those who come across this material accidentally. Researchers or journalists who view or collect material for academic purposes – including our team at CRT – are exempt from the new “three strikes” law under a defence of “reasonable excuse”. But if the responsibility for consumption lies with the consumer, easier channels for reporting extremist content and those who promote it must be created.

What is needed is better evidence gathering online: a clearer understanding of the profiles, groups, and networks disseminating extremist or terrorist content across multiple platforms, which can then feed into court proceedings and auditing processes.

This cannot be done without research and participation from end users. The public should be able to report and flag extremist content directly to technology companies for removal more easily. For example, there is still no flagging system for users to report instructional terrorist manuals or disturbing extremist content in Google search results. On Facebook and Twitter, posts still cannot be reported as specifically containing terrorist or extremist content, only “hate content”. And a single piece of disturbing content cannot currently be flagged for more than one reason at once: a video cannot be marked as “violent”, “hateful” and “terrorist” at the same time, for example.

A joint approach by the public, government and technology companies is much needed when it comes to regulating content online. But none of this is likely to succeed if those who view terrorist content can do nothing about it.

Nikita Malik is Director of the Centre on Radicalisation and Terrorism at the Henry Jackson Society.