Filtering in the UK: The hinterland of legality, where secrecy trumps court rulings

James Brokenshire was giving an interview to the Financial Times last month about his role in the government’s online counter-extremism programme. Ministers are trying to figure out how to block content that’s illegal in the UK but hosted overseas. For a while the interview stayed on course. There was “more work to do” negotiating with internet service providers (ISPs), he said. And then, quite suddenly, he let the cat out of the bag. The internet firms would have to deal with “material that may not be illegal but certainly is unsavoury”, he said.

And there it was. The sneaking suspicion of free thinkers was confirmed. The government was no longer restricting itself to censoring web content which was illegal. It was going to start censoring content which it simply didn’t like.

If you call the Home Office they will not tell you what Brokenshire meant. Does it mean “unsavoury” material will be forced onto ISPs’ filtering software? They won’t say. Very probably they do not know.

There is a lack of understanding at the Home Office of what they are trying to achieve, of how one might do so, and, more fundamentally, of whether one should be trying at all.

This confusion – more of a catastrophe of muddled thinking than a conspiracy – is concealed behind a double-locked system preventing any information getting out about the censorship programme.

It is a mixture of intellectual inadequacy, populist hysteria, technological ignorance and plain old state secrets. And it could become a significant threat to free speech online.

The Home Office’s current over-excitement stems from its victory over the ISPs last year.

Ministers, from New Labour onward, have always tried to bully ISPs with legislation if they refuse to sign up to ‘voluntary agreements’. It rarely worked.

But David Cameron positioned himself differently, by starting up an anti-porn crusade. It was an extremely effective manoeuvre. ISPs now suddenly faced the prospect of being made to look like apologists for the sexualisation of childhood.

Or at least, that’s how it was sold. By the time Cameron had done a couple of breakfast shows, the precise subject of discussion was becoming difficult to establish. Was this about child abuse content? Or rape porn? Or ‘normal’ porn? It was increasingly hard to tell.

His technological understanding was little better. Experts warned that the filtering software was simply not at the level needed for it to fulfil politicians’ requirements.

It’s an old problem, which goes back to the early days of computing: how do you get a machine to think like a person? A human can tell the difference between images of child abuse and the website of child support group Childline. But it has proved impossible, thus far, to teach a machine about context. To filters, they are identical.

MPs like filtering software because it seems like a simple solution to a complex problem. It is simple. So simple it does not exist. Once the filters went live at the start of the year, an entirely predictable series of disasters took place.

The filters went well beyond what Cameron had been talking about. Suddenly, sexual health sites had been blocked, as had domestic violence support sites, gay and lesbian sites, eating disorder sites, alcohol and smoking sites, ‘web forums’ and, most baffling of all, ‘esoteric material’.  Childline, Refuge, Stonewall and the Samaritans were blocked, as was the site of Claire Perry, the Tory MP who led the call for the opt-in filtering. The software was unable to distinguish between her description of what children should be protected from and the things themselves.
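The failure mode that caught Childline and Claire Perry’s own site is easy to sketch. The blocklist below is hypothetical, but the logic — matching words with no notion of context — is exactly why a support site and the content it warns about look identical to a filter:

```python
# Minimal sketch of a context-blind keyword filter (hypothetical blocklist).
# A page describing what children should be protected from contains the
# same words as the harmful content itself, so both are flagged.

BLOCKLIST = {"abuse", "porn", "violence"}

def is_blocked(page_text: str) -> bool:
    """Return True if any blocklisted word appears in the page text."""
    words = set(page_text.lower().split())
    return bool(words & BLOCKLIST)

# A help page for children...
help_page = "advice for children experiencing abuse at home"
print(is_blocked(help_page))  # True: the support site is blocked too
```

No amount of list-tuning fixes this: the filter sees words, not intent, so over-blocking of exactly the sites children most need is built in.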

At the same time, the filtering software was failing to get at the sites it was supposed to be targeting. Under-blocking ran at somewhere between 5% and 35%.

Children who were supposed to be protected from pornography were now being denied advice about sexual health. People trying to escape abuse were prevented from accessing websites which could offer support.

And something else curious was happening too: a reactionary view of human sexuality was taking over. Websites which dealt with breastfeeding or fine art were being blocked. The male eye was winning: reinforcing the sense that the only function of the naked female body was sexual.

It was a staggering failure. But Downing Street was pleased with itself: it had won. The ISPs had surrendered. The Washington Post described it as “some of the strictest curbs on pornography in the Western world” – music to Cameron’s ears. Suddenly the terms of the debate started shifting. Dido Harding, the chief executive of TalkTalk, was saying the internet needed a “social and moral framework”.

So instead of proving the death knell for government-mandated internet censorship, the opt-in system became a precursor for a more extensive ambition: banning extremism.

If targeting porn without also blocking sexual health websites was hard, countering terrorism was even more difficult. After all, the line between legitimate political debate and inciting terrorism is blurred and subjective. And that’s not even to address other pieces of problematic legislation, such as the Racial and Religious Hatred Act 2006, which bans incitement to hatred against religions.

Even trying to block what everyone agrees is extremist content is highly controversial. Anti-extremism group Quilliam and security experts at the Royal United Services Institute have warned that closing websites where people are liable to be radicalised actually hinders the intelligence services.

A lot of what we know about Brits going off to fight in Syria or elsewhere comes from the fact they write it on message boards. Blocking them just reduces your ‘intelligence take’. Groups like Quilliam also use those sites to go in and engage with people, offering them a ‘counter-narrative’. Blocking the sites prevents them doing their work.

The Home Office mulled whether to add extremism – and Brokenshire’s “unsavoury content” – to something called the Internet Watch Foundation (IWF) list.

The list was supposed to be a collection of child abuse sites, which were automatically blocked via a system called Cleanfeed. But soon, criminally obscene material was added to it – a famously difficult benchmark to demonstrate in law. Then, in 2011, the Motion Picture Association started court proceedings to add a site indexing downloads of copyrighted material.

There are no safeguards to stop the list being extended to include other types of sites.

This is not an ideal system. For a start, it involves blocking material which has not been found illegal in a court of law. The Crown Prosecution Service is tasked with saying whether a site reaches the criminal threshold. This is like coming to a ruling before the start of a trial. The CPS is not an arbiter of whether something is illegal. It is an arbiter, and not always a very good one, of whether there is a realistic chance of conviction.

As the IWF admits on its website, it is looking for potentially criminal activity – content can only be confirmed to be criminal by a court of law. This is the hinterland of legality, the grey area where momentum and secrecy count for more than a judge’s ruling.

There may have been court supervision in putting in place the blocking process itself, but it is not present for individual cases. Record companies are requesting sites be taken down and it is happening. The sites are only notified afterwards, and only able to make representations afterwards. The traditional course of justice has been turned on its head.

The possibilities for mission creep are extensive. In 2008, the IWF’s director of communications claimed the organisation is opposed to censorship of legal content, but just days earlier it had blacklisted a Wikipedia article covering the Scorpions’ 1976 album Virgin Killer and an image of its original LP cover art.

Sources close to the ISPs say they were asked to take the IWF list wholesale – including pages banned due to extremism – and block them for all their customers, whether they had signed into the filtering option or not.

They’ve proved commendably reluctant, although their reticence is as much about legal challenges as a principled stance on free speech. Regardless, they seem to be insisting that universal blocking can only be carried out with a court order. Brokenshire is then left trying to get them to include it in their optional, opt-in filter.

We don’t know if he’s succeeded in that. The Home Office are resistant to giving out any information. They direct inquiries to the Department of Culture, Media and Sport or ISPs themselves, who really have no idea what’s going on. They refuse to answer any questions on Cleanfeed, saying it is a privately owned service – a fact which is technically true and entirely misleading.

It is not conspiracy. It is plain old cock-up, combined with an inadequate understanding of the proper limit of their powers.

The left hand does not know what the right hand is doing. Even inside departments of the Home Office they do not know what they are trying to achieve.

The policy formation is weak and closed. The industry is not in the loop. Media inquiries are being dismissed. The technological understanding is startlingly naive.

The prospect of a clamp down on dissent is real. It would come slowly, incrementally – a cack-handed government response to technological change it does not understand.

We must be grateful for James Brokenshire. His slip ups are the best source of information we have.

This article was originally posted on 17 April 2014 at indexoncensorship.org

The mechanics of China’s internet censorship

Even rainstorms can be sensitive in China. The recent storm in Beijing which killed at least 77 people caused the censors to come out in force, with newspapers told to can coverage and online accounts of the deluge snipped.

But with 500 million internet users, the obvious question is, how does China do it? What are the mechanics of China’s internet censorship?

It makes things simpler if we divide the censorship first into two camps: censoring the web outside China and censoring domestic sites.

American journalist James Fallows’ very readable account of how China censors the outside web explains: “Depending on how you look at it, the Chinese government’s attempt to rein in the internet is crude and slapdash or ingenious and well crafted.”

Briefly, this is what happens.

Censoring incoming web pages

The public security ministry is the main government body which oversees censorship of the outside Internet through its Golden Shield Project.

The key to their control is the fact that unlike many other countries, China is only connected to the outside internet through three links (or choke points as Fallows calls them) — one via Japan in the Beijing-Tianjin-Qingdao area, one also via Japan in Shanghai and one in Guangzhou via Hong Kong. At each one of these choke points there is something called a “tapper” which copies each website request and incoming web page and sends it to a surveillance computer for checking. This means that browsing non-local websites in China can sometimes be frustratingly slow.

There are four ways for a surveillance computer to block your request.

  • The DNS (Domain Name System) block: When you request a web page, the DNS looks up the address of that page in computer language (the IP address). China has a list of IP addresses it blocks; if your web page is hosted at one of them, the DNS is instructed to give back a bad address and you will get a “site not found” error message.
  • Connection reset: Another way the government prevents you seeing one of its blacklisted sites is not to return a bad address but to constantly reset the request, which is slightly more insidious since this kind of error can occur naturally. If it happened outside China you could retry and the chances are the next time you would be successful. But in China the reset is intentional, and however many times you resend the request you will get a “connection reset” error.
  • URL keyword block: To cast its net even wider, the tappers also check the web address. If it contains any banned words, say “Falun Gong” or “Dalai Lama”, the request is sent into an infinite loop: you never reach the site and your connection times out.
  • Content filtering: In this technique the content of web pages is scanned for banned words, with the connection timing out if any blacklisted words are found. This could, for example, allow you to browse the Guardian website, but not access some of its news stories.
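Kept purely as an illustration — the real Golden Shield internals are not public, and the blacklists here are hypothetical — the surveillance computer’s decision for one request can be sketched as:

```python
# Toy sketch of the blocking checks described above (hypothetical lists).
# Not the real Golden Shield logic, which is not publicly documented.

BLOCKED_IPS = {"203.0.113.7"}                    # hypothetical IP blacklist
BANNED_KEYWORDS = {"falun gong", "dalai lama"}   # hypothetical keyword list

def check_request(url: str, resolved_ip: str) -> str:
    """Simulate the surveillance computer's decision for one request."""
    # 1. DNS/IP block: the resolved address is on the blacklist.
    if resolved_ip in BLOCKED_IPS:
        return "DNS/IP block: bad address returned -> 'site not found'"
    # 2. URL keyword block: normalise separators, then scan the address.
    lowered = url.lower().replace("-", " ").replace("+", " ")
    if any(word in lowered for word in BANNED_KEYWORDS):
        return "URL keyword block: request looped -> connection times out"
    # 3. Otherwise the page is fetched; its content is then scanned
    #    separately (content filtering), which can still kill the connection.
    return "pass: page fetched, content then scanned for banned words"

print(check_request("http://example.com/dalai-lama-news", "198.51.100.1"))
# URL keyword block: request looped -> connection times out
```

The connection-reset technique is omitted here because it operates at the TCP level (injecting reset packets) rather than on the request text; the point of the sketch is only the order of the checks.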

Censorship technology is continuously becoming more sophisticated, and words and IP addresses go on and off the blacklists.

Index contacted Jed Crandall, an assistant professor of computer science at the University of New Mexico whose research has focused on Chinese internet censorship, to ask him if there had been much change to the above in the four years since Fallows’s article. Here’s what he said:

“It seems like filtering the content of the web pages using internet routers was not working well for the censors, and they even seemed to be devoting less resources to it over time as we did our experiments,” he told Index by email interview. “They still block IP addresses, DNS addresses, and do keyword filtering on GET requests [URL keyword block].”

Censoring domestic websites

Far more of a challenge to the Chinese government is keeping its homegrown internet in check. And this it does mostly by making sure the private companies that run most of the Chinese web self-censor, by issuing threats, “vaguely-worded” laws and, in the case of breaking stories, day-to-day directives.

Censoring professional content

Web companies self-censor in many different ways. Content which they produce themselves is “cleansed” first by the writer and then by editors if necessary. There are few specific censorship guidelines; it is more of an acquired habit of knowing where to draw the line based on fear of punishment. American scholar Perry Link wrote an eloquent essay back in 2003 — read it here — about how Chinese censorship is like an anaconda in the chandelier ready to pounce if someone oversteps that line:

The Chinese government’s censorial authority in recent times has resembled not so much a man-eating tiger or fire-snorting dragon as a giant anaconda coiled in an overhead chandelier. Normally the great snake doesn’t move. It doesn’t have to. It feels no need to be clear about its prohibitions. Its constant silent message is ‘You yourself decide,’ after which, more often than not, everyone in its shadow makes his or her large and small adjustments–all quite ‘naturally.’

Censoring user-produced content

This is where it gets really interesting.

“Social media is more dynamic and fluid than traditional online content, so the censors have to be creative in how they control social media,” says Crandall.

Banned topics and sensitive terms are deleted by hand by armies (literally) of paid internet “police”. This, from a paper published here in June by a team of researchers at Harvard University:

The size and sophistication of the Chinese government’s program to selectively censor the expressed views of the Chinese people is unprecedented in recorded world history. Unlike in the US, where social media is centralized through a few providers, in China it is fractured across hundreds of local sites, with each individual site employing up to 1,000 censors. Additionally, approximately 20,000–50,000 Internet police and an estimated 250,000–300,000 “50 cent party members” (wumao dang) are employed by the central government.

More evidence for the lack of a hardcopy list of banned topics is that different online companies seem to censor different things.

Crandall adds:

One thing we’ve noticed in our research is that what various companies censor seems to vary widely from company to company, and there doesn’t seem to be any obvious ‘master list’ of what companies are supposed to censor.  They seem to make up their own lists based on what they think their liabilities would be if the government had to intervene.

For example, censoring in Tibet and Qinghai (a largely Tibetan province) is much stricter than in eastern parts of the country.

Latest trends

Recent reports on Chinese internet censorship have offered some surprising results. First, the Harvard paper referred to above analysed Chinese-language blogs and found that censors were targeting material that could have incited protests or other types of mass action, leaving material critical of the government uncensored.

A recent University of Hong Kong study on Weibo (China’s wildly popular version of Twitter) posts found that the list of words was changing constantly.

“What we are finding is a constantly morphing list of keywords, a cat-and-mouse contest between people and censors,” King-wa Fu, one of the study’s researchers, told the Economist last month.

There might be more to it than a simple game of catch-up. According to Crandall, censorship can be used as a sophisticated tool to control the news. In a paper titled Whiskey, Weed, and Wukan on the World Wide Web: On Measuring Censors’ Resources and Motivations, he and his co-authors found that when a news story reflects badly on the government, posts about it are censored; once that story has been turned around into a good one, the word is unblocked.

“It appears that censorship was applied only long enough for the news about Wukan to change from sensitive news to a story of successful government intervention to reach a peaceful resolution to the problem,” the paper’s authors write.

(Wukan is the name of a village in southern China where huge clashes between the police and locals occurred over illegal land grabs late last year. The government eventually caved in to the villagers’ demands and then turned the story into one of a provincial government victory.)

The Future

Chinese censorship has to move with the times, particularly now there are 500 million Chinese online, many of whom are ardent microbloggers.

Crandall believes that the government is looking into how to manipulate social media to influence the news.

I think what the future of internet censorship holds is more emphasis on control and less emphasis on blocking content. It’s very difficult to block some specific topic, but if you can slow down spread of news of the topic at some times and speed up spread of news about the topic at other times you can use that to your advantage to control how issues play out in the news cycles.

MORE ON THIS STORY:

Blogger Wen Yunchao wrote for Index on Censorship magazine in 2010 about the art of Chinese censorship. Read his article here.

Also writing for our magazine, Southern Weekend columnist Xiao Shu discusses the repressive and chaotic nature of China’s internet censorship here.

Read Chinese author and blogger Han Han’s essay about publishing and censorship in China here.

 

Access and education crucial in the age of the smartphone

In the past decade, there has been a boom in mobile phone subscriptions, jumping from fewer than one billion in 2000 to six billion in 2012. Seventy-seven per cent of those subscriptions are now owned by individuals in developing countries. Digital access, on the other hand, trails far behind with only 35 per cent of the world actually online. But this is likely to improve, particularly with the rise of smartphones, which currently make up about a quarter of the 4 billion phones in use globally.

Even with expected improvements in technology and falling production costs, increasing mobile access relies on more than simply lowering the prices of handsets. Lack of access to a mobile phone is tied to factors such as gender and economic inequality. In developing countries, for example, women are 21 per cent less likely than men to own a mobile phone.

India has a high rate of mobile penetration, with 76.8 per cent of its 1.2 billion population using mobile phones. Gender norms also play a role in whose hands mobile phones fall into: only 28 per cent of India’s mobile phone owners are women, versus 40 per cent for men.

Lower prices are expected to help make smartphones more accessible in India, as they currently only account for 10 million of the estimated 960 million mobile phone users in the country.

While Brazil has a high mobile phone penetration rate (99.8 per cent), the massive economic divide contributes to some of the challenges in mobile access in the country. According to a recent study, many residents of Brazil’s slums (favelas) share phones or steal them because of the outrageous prices of mobile phones and unfamiliarity with technology. The country also has the third-highest rates for mobile services in the world. Smartphone penetration in Brazil is at about 14 per cent, and will only increase if the price of mobile services and handsets decrease.

The United Kingdom has its own divide, with smartphone penetration at 51.3 per cent. However, ownership of a smartphone does not necessarily mean that the owner understands how to use it: many users do nothing more than make phone calls and send text messages. Users might be unaware that their rights may be diminished through the filtering and blocking that comes enabled by default on many smartphones in the UK. This only shows how important it is to build literacy around technology across the globe. Access also does not depend on price alone; it also relies on 3G infrastructure.

Thanks to improved mobile phone technology, and improved networks, more people will be online, bringing us a step forward in not only increasing mobile access, but also bridging the digital divide — and that increase in availability only makes it more important to protect free expression online.

Sara Yasin is an Editorial Assistant at Index on Censorship