Jihad trending: Analysis of online extremism and how to counter it
With fears intensifying over the potential impact of returning foreign fighters and of ‘lone wolf’ terrorists, governments are increasingly targeting the Internet as a source of radicalisation. Dr Erin Marie Saltman of the Quilliam Foundation writes
23 May 14

Censorship is central to the current debate on how to counter extremism online. With fears intensifying over the potential impact of returning foreign fighters and of ‘lone wolf’ terrorists, governments are increasingly targeting the Internet as a source of radicalisation. However, negative measures such as censorship attack only the symptom rather than the array of its causes. Findings from our recent Quilliam report show that censorship initiatives not only prove ineffective in tackling extremism, but are potentially counter-productive.

Governments still rely largely on negative measures (such as filtering, blocking and censoring) when tackling extremist content on the Internet. Yet censorship has never been the solution, nor will it be. The debate around what content is legal and illegal online, and how authorities should deal with it, is a sensitive one, and it has raised a range of questions over the last ten years as the Internet has expanded, changing the way we communicate, educate and socialise.

Across Europe, filtering Internet traffic with a view to blocking access to unwanted content has generated debate over free speech and the legality of censorship. While some filtering is nation-specific (France and Germany filter and block content related to Nazism or Holocaust denial), other filtering trends are EU-level initiatives (such as the filtering and attempted eradication of Child Sexual Abuse Imagery, or CSAI). Although no one is likely to argue for the continued availability of CSAI, there is a much broader debate to be had about what to do with so-called ‘extremist content’.

The British and French governments have set up online portals in recent years allowing the public to anonymously report potentially illegal websites and materials. The UK’s Counter Terrorism Internet Referral Unit (CTIRU), for example, runs an official portal through which individuals can submit ‘online terrorist material’, which is then reviewed and flagged to Internet Service Providers (ISPs). The CTIRU has removed over 29,000 pieces of illegal terrorist propaganda in the last three years.

While this number might sound high to some, it pales in comparison to the amount of illegal content online, which continues to grow exponentially. Furthermore, it is more or less impossible to calculate the true quantity of content that might fall under specific guidelines of illegality, and harder still to quantify content that might be deemed ‘extremist’.

Even if this were a viable route to eliminating unwanted content, the fact of the matter is that the most potent forms of online extremism are not found on static websites, the principal platforms that filtering initiatives target. It should come as no surprise that, following mainstream social trends, extremist organisations and their supporters are most active and effective on social media platforms. These are more difficult to target through filtering systems, and they are also more ephemeral: once a conversation has been had, or a video shared, filtering does little to stop the message spreading.

Furthermore, blocking or censoring material does not ensure that the same unwanted content will not reappear. Websites can change domain names, content can be re-posted, and the discourse around a given subject can simply move elsewhere. But even if we could find a way to rid the web of content we deem extremist, the question remains whether we should.

While government portals such as the CTIRU site provide an outlet for the public to alert officials about material they believe to be illegal, there is significant dispute over what can be deemed extremist and worthy of filtering. The adoption of an agreed definition of extremism continues to be deeply problematic.

Terrorism, and terrorist-related content, is grounded in a well-defined legal structure articulated through the UK’s Terrorism Acts, as well as those of other countries. As such, any material that promotes terrorism, encourages individuals to enact violence against others and/or supports a terrorist organisation is illegal in the UK, whether on- or offline. ‘Extremist material’, however, is a far more contentious subject, remaining, ultimately, in the realm of ideas. We should not look to expand the already broad definition of terrorism to include extremism, and must recognise that the two phenomena, though related, are distinct and, as such, require distinct responses.

Governments should continue to target content that falls squarely within their legal guidelines, even though such negative measures will never fully abolish illegal content. Engagement with, and access to, extremist online content also informs the work of counter-extremist organisations, which can use this information to develop counter-speech. Online counter-extremism should be civil society-led, since the Internet remains largely self-regulating; we should contest this ungoverned space rather than attempt to censor it.

Some groups have already established online platforms to develop counter-speech and counter-extremist content; these include websites such as Islam against Extremism and Radical Middle Way, and the YouTube channel set up by the Against Violent Extremism (AVE) network. However, movements like these need far more support and constant updating. We need to encourage counter-extremism practitioners, community groups and local actors to contest online space and promote the same positive messages online as they do offline. This is already being done to an extent: by the US State Department’s Twitter account, @thinkagain_DOS, which directly challenges terrorist narratives online; by Abdullah X, an online graphic novel that aims to make counter-extremism more fashionable than jihadism; by the Google Ideas-run AVE network, which acts as a resource to help us learn from the mistakes of former extremists; and by us at Quilliam, an organisation that works to hone the arguments with which to refute extremist ideologies, challenges their Manichean narratives and promotes pluralism within British Islam as an antidote to extremism.

These are just four of the approaches that have been identified; there are undoubtedly many more. Using such methods, we can effectively inhibit the capacity of extremist narratives to monopolise online platforms, which are, now more than ever, the defining spaces of public debate and expression.

By Dr Erin Marie Saltman

Dr Erin Marie Saltman is a senior researcher at the Quilliam Foundation
