
Marta Cooper: "CleanIT" – even worse than it sounds
27 Sep 12

An EU project aimed at “reducing the impact of terrorist use of the internet” is as vague as it sounds.

This was made clear this week, when Brussels-based NGO European Digital Rights published a leaked document of draft proposals from CleanIT, a project funded by the European Commission to “counter the illegal use of the internet” through voluntary principles. Some proposals were labelled “recommendations” and others “to be discussed”.

The emphasis is on the word “draft” here, but the fact that some of the proposals were even considered is worrying enough. Among them:

– “Knowingly providing hyperlinks on websites to terrorist content must be defined by law as illegal just like the terrorist content itself”

– “Governments must disseminate lists of illegal, terrorist websites”

– “Internet companies must allow only real, common names.”

– “Social media companies must allow only real pictures of users.”

Beyond being vague, such principles would have dire implications not only for users’ privacy but also for their safety. The Electronic Frontier Foundation, which has written a thorough run-down of the proposals’ impact, has long argued that user anonymity is crucial in protecting the safety of activists, whistleblowers and victims of abuse. That safeguard would be threatened if social media companies were to enforce a real-picture policy on users who would otherwise prefer (or need) to protect their identity.

Other suggestions are so far removed from the open nature of the internet, and demonstrate such a minimal understanding of how it actually works, that you wonder how they ever reached the suggestion stage. One passage flagged “to be discussed” reads:

Legislation must make clear Internet companies are obliged to try and detect to a reasonable degree (…) terrorist use of the infrastructure and can be held responsible for not removing (user generated) content they host/have users posted on their platforms if they do not make reasonable effort in detection.

Yet relying on internet companies and intermediaries (such as search engines) to police the web is problematic. Besides essentially shooting the messenger by taking issue with the host rather than the user, it is impractical to expect internet companies to have the manpower to sift through the bottomless pit of content uploaded to their sites every day (for YouTube, that’s 72 hours of video uploaded every minute).

Another misguided suggestion, as technology pundit Glyn Moody has pointed out, is that of a “reporting button” for browsers or operating systems. This will:

…send a signal to the Internet company involved, which will take appropriate action; The system will also send a signal to LEA [law enforcement authority], which after some time will check whether it is satisfied by the Internet company and could chose to start a formal notice and action procedure, Governments will start drafting legislation that will make offering such a system to Internet users obligatory for browser or operating system service company as a condition of selling their products in this country of the European Union.

Moody goes on to say that such proposals “don’t take account of how open source browsers or operating systems are created and distributed.” He adds:

There isn’t always a “company” that produces such code. In other words, the entire mental structure of the person or persons putting forward the above proposals seems innocent of the idea of free software and what that freedom necessarily means. A cynic might wonder whether open source would even allow to be used in a world ruled by Clean IT ideas…

The ease with which we communicate online evidently offers both opportunities for change and threats to freedom. It’s reasonable, then, that governments, civil society groups and others would want to discuss how the internet can grow while still protecting the safety of its users.

But doing so with vague language citing a need to guarantee security — a method also adopted in the drafting of the UK’s Communications Data Bill — is not only misguided and illiberal, but risks hampering the innovation that has allowed the internet to flourish as it has.

Marta Cooper is an editorial researcher at Index. She tweets at @martaruco