13 Nov 2012 | minipost, News and features
Google’s new transparency report reveals government requests for user data and takedowns are on the increase
Today the search giant updated its bi-annual report with requests from January to June 2012. In a blog accompanying the report a Google analyst said:
This is the sixth time we’ve released this data, and one trend has become clear: Government surveillance is on the rise.
In the first half of 2012, the internet giant received 20,938 demands for user data from government agencies around the world — a 33 per cent increase on the same period last year.
Takedown requests from government entities are also on the rise: officials made 1,789 demands to remove 17,746 items. Google also released details of some of the UK removal requests:
- We received a request from a local law enforcement agency to remove 14 search results for linking to sites that criticise the police and claim individuals were involved in obscuring crimes. We did not remove content in response to this request. In addition, we received a request from another local law enforcement agency to remove a YouTube video that accused the agency of racism. We did not remove content in response to this request.
- The number of content removal requests we received increased by 98 per cent compared to the previous reporting period.
In a policy paper released last week Index expressed serious concerns about the rapid increase in the number of governments and government surrogates who use takedown requests to silence critics.
28 Aug 2012 | Digital Freedom, Uncategorized
December will see the World Conference on International Telecommunications (WCIT), organised by the International Telecommunication Union (ITU), a specialised UN agency that sets standards for international telephony. The Dubai-based conference will bring together 190 nations and, while members have been meeting behind closed doors, various policy proposals have been leaked by activists on the website WCITLeaks.
There are huge decisions at stake over the future of internet governance, with the battle lines being drawn between governments that see the access to information as a matter of human rights, and others that consider the control of information to be an issue for the state.
Russia and China have been putting forward proposals to regulate certain areas of the web, citing the old axioms of crime and security. These are areas which are currently unregulated due to, as Rebecca MacKinnon writes, a “lack of international consensus over what those terms actually mean or over how to balance enforcement with the protection of citizens’ rights.” Of course, this is not the first time these two nations have banged the drum against Western domination of such institutions or asserted their national sovereignty over cyberspace. Nor is it just authoritarian regimes with patchy human rights records that are citing these concerns as justification for national control of the web. A year ago, Brazil and South Africa called for a global internet governance body to be located within the UN system.
Opponents believe such proposals encroach upon the free and open nature of the internet. If the governance of the internet were in the hands of a UN body, this trend of individual nations exerting overt censorship would be strengthened. Russia’s creation last month of a blacklist of websites that promote drugs or suicide or contain porn or “extremist” materials is just one example of a trend in which free expression is continually chilled. China, a country of 500 million internet users, also finds sophisticated ways of censoring the web (see Dinah Gardner’s thorough explainer here).
Yet the current multi-stakeholder approach is not without its problems, either (MacKinnon gives an illuminating rundown of the current governance ecosystem here). As Katitza Rodriguez of the Electronic Frontier Foundation noted at a panel this summer, “a large part of the world’s population feels excluded from international Internet policy making venues.” While this is certainly the case, this exclusion is exacerbated when restrictive internet policies are imposed on the world by a handful of governments pursuing a national agenda.
A major challenge will be diversifying the multi-stakeholder model to include more voices who are not only the most affected by but also vulnerable to repressive internet policies, as MacKinnon has highlighted.
But as actors work out which governance model suits the web — and freedom of expression — best, December’s conference, as Index trustee John Kampfner writes, marks “just the start of the battle between those who wish to keep the internet (relatively) free and those who will do everything in their power to reverse the process.” More power games lie ahead in the fight for online freedom.
Marta Cooper is an editorial researcher at Index. She tweets at @martaruco
15 Aug 2012 | Uncategorized
This debate was originally published at www.newstatesman.com
YES
Andrea Leadsom MP, Conservative Member of Parliament for South Northamptonshire
There is a need for drastic action to be taken to prevent young people being exposed to disturbing material on the internet.
The majority of today’s parents know less about technology than their own kids do, and have little control over the internet content their children can access. It’s not just pornography that is a problem; the internet is full of inappropriate material, including sites on self-harm, anorexia, bomb-making and suicide.
Society has long held the view that we allow parents the right to “hold power” over their own children in order to protect them, to educate them and keep them from the harsher realities of the world until they are mature enough to handle them properly.
This right is being undermined by the rapid and exponential progression of internet-enabled technology, and few parents feel confident that they are adequately protecting their children as they browse.
There are two sound ways to ensure that children are not exposed to dangerous or disturbing content. At the level of the Internet Service Provider, individual sites can be blocked “at source”, with ISPs taking the initiative by offering filters for adult sites and offering to block various forms of selected content, tailored to the individual needs of the household. This would have to extend to mobile internet providers, who still lag a long way behind.
There should be a range of choices on what content to block, from pornography and self-harm to bomb-making websites. Adults choose from a variety of providers and pay for the internet services they use, so they should be able to change these settings at will. ISPs could introduce different passwords for different family members as well.
One of the imaginative ways this has been accomplished is by TalkTalk, which offers parents a “HomeSafe” service that allows different filter levels for a variety of content, and is completely customisable and controllable by the end user.
The other way that things could be changed is with a move away from the standard .co.uk and .com Top Level Domains (TLDs) for more explicit content, to separate entirely inappropriate sections of the web. Already there is a .xxx TLD available for pornographic websites, which would mean that a parent would simply have to be given the option to block all websites with this ending. Another alternative would be a “.18” TLD, applicable to any age-sensitive information.
There is a view that the internet is in need of a monitor for obscene and adult websites. Outside of cyberspace, we have bodies such as Ofcom and the British Board of Film Classification that continually work to ensure our children are not exposed to the wrong things. This could be implemented in some way online, whereby a website would have to have its content “rated” before being accessible online. While it sounds like a massive leap, the majority of new websites already go through testing when they are hosted to make sure that a site is intact and that files and content are free of viruses. This would simply be adding another check to the list, and in reality it is a burden already carried by film makers.
NO
Padraig Reidy, news editor, Index on Censorship
In May of last year, as fighting raged on the streets of Sana’a, Yemen, Index on Censorship’s correspondent there emailed me to ask if I had any problems getting onto her blog, where she regularly posted articles and video. I could view the site in London, but neither she nor anyone else in Yemen could.
After a small bit of digging, we found the problem: the Canadian company that supplied filtering technology to several Arabian peninsula countries had blocked the entire blogging platform Tumblr after complaints that it carried pornographic content.
This is a simple example of the dangers of handing over the power to decide what you can and cannot view on the web, a proposal being put forward by Conservative MP Claire Perry.
A feature of censorship in the modern democratic world is that it is often carried out with the best of intentions. Where once our blasphemy laws protected the ultimate power (who apparently needed our help) now we design initiatives to protect the vulnerable: women, minorities and above all, children.
But the reasonableness, the niceness of the motives can make the proposed solutions almost impossible to critique without the conversation being drowned by a chorus of Helen Lovejoys insisting that Someone Please Think Of The Children. I can recall once appearing on a BBC discussion show where a self-appointed moral guardian informed me that she felt obliged to protect children (the implication being that anyone who disagreed with her meant harm to children).
Let’s work on the assumption that we all want to protect children from the many weird and unsavoury things on the internet (You don’t? You monster!): is off-the-shelf automatic filtering really the best way to go about this? I’d suggest not: at the very least, such technology may create a false sense of security, lulling parents into the belief that it is now utterly impossible for their children to access dubious content online. But anyone who’s ever been schooled by a tech-literate teen knows that nothing is impossible for them.
It also runs the risk of blocking harmless and even useful content — and not just reports on the Yemen uprising. When a list of blocked sites maintained by ACMA, the Australian Communications and Media Authority, was leaked in 2009, about half of the list consisted of legitimate sites that would not normally be blocked, including a MySpace page and the homepage of a dentist.
Automatic filters can also mean users fall foul of what is known as the “Scunthorpe problem” (think about it), and gay rights sites can easily get classified as pornographic.
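To see why the “Scunthorpe problem” is so hard to engineer away, here is a minimal sketch of a naive substring filter — purely illustrative, not the workings of any actual ISP or vendor product — showing how crude keyword matching flags innocent place names:

```python
# Minimal sketch of a naive substring-based content filter.
# Real filters are more elaborate, but they fail in the same basic way.

BLOCKED_SUBSTRINGS = ["sex", "cunt"]  # a crude profanity blocklist

def is_blocked(text: str) -> bool:
    """Flag text if any blocked substring appears anywhere inside it."""
    lowered = text.lower()
    return any(bad in lowered for bad in BLOCKED_SUBSTRINGS)

# Innocent place names tripped by substring matching:
print(is_blocked("Welcome to Scunthorpe"))   # True  - contains "cunt"
print(is_blocked("Essex County Council"))    # True  - contains "sex"
print(is_blocked("Index on Censorship"))     # False
```

The false positives are structural: the filter cannot distinguish an offensive word from the same letters buried inside a harmless one without understanding context, which is exactly what automated filtering lacks.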
It is not unreasonable to ask that companies make technology available to help parents control what their children view. But the choice must ultimately be in the hands of parents. With technology-based problems, we tend too often to imagine that the solution must also be technology-based. But the issue here is words and pictures, not bits and pixels. We keep an eye on what our children eat and drink, what books they read and what television they watch — and we would resent a private company that does not know our child having the power to do so. The same real-world watchfulness is the only way of keeping children safe online.
9 Aug 2012 | Digital Freedom, Uncategorized
The nice people at the New Statesman asked me to take part in a debate on web filtering with Conservative MP Andrea Leadsom this week. You can read the whole thing here. It’s certainly worth reading Leadsom’s arguments, as she does represent a significant body of opinion.
What I find interesting about the viewpoint of Leadsom and others is a curious faith in technology. It’s an odd take on cyber-utopianism. While they clearly do not believe that technology is the ultimate liberating force, they still seem to believe that the best way to counter the great wash of “inappropriate” [Leadsom’s word] content on the web is more technology. It’s as if they’re locked in a pornographer-versus-guardian arms race, and have long since lost sight of the actual aim.
Contrast this with what our China correspondent Dinah Gardner writes today on how the Communist Party, which is far more serious about censorship than Andrea Leadsom, handles the issue. While they do employ technology, they also employ thousands of people to monitor and delete content. They’ve realised that algorithms can only achieve so much.
Humans in the main resent authoritarian regimes because they treat us like we’re children: but when we are talking about actual children, then the debate changes slightly. We’ve pretty much accepted that we can put some limits on the rights of children — particularly on what information they can access. But the most developed filtering program in the world is no replacement for an interested adult taking care of a child’s education and entertainment.