Right to be forgotten: A poor ruling, clumsily implemented


When Europe’s highest court ruled in May that individuals had a ‘right to be forgotten’, many were quick to hail this as a victory for privacy. ‘Private’ individuals would now be able to ask search engines to remove links to information they considered irrelevant or outmoded. In theory, this sounds appealing. Which one of us would not want to massage the way in which we are represented to the outside world? Certainly anyone who has had malicious smears spread about them in false articles, or embarrassing pictures posted of their teenage exploits, would want to – as would criminals whose convictions are spent and who have a legal right to rehabilitation. In practice, though, the ruling was far too blunt, far too broad-brush, and gave far too much power to the search engines to be effective.

At the time of the ECJ decision, Index warned that the woolly wording of the ruling – its failure to include clear checks and balances, or any form of proper oversight – presented a major risk. Private companies like Google – no matter how broad and noble their advisory board might be on this issue – should not be the final arbiters of what should and should not be available for people to find on the internet. It is like a government devolving to librarians the power to decide what books people can read (based on requests from the public), and then locking those books away. There is no appeal mechanism, no transparency about how Google and others arrive at decisions about what to remove, and very little clarity on what qualifies as ‘relevant’. Privacy campaigners argue that the ruling contains a public interest protection element (politicians and celebrities, for example, should not be able to request the right to be forgotten), but – again – it is hugely oversimplistic to argue that the public’s interest will be protected simply by excluding serving politicians and current stars from the request process.

We are starting to see some high-profile examples of how the ruling is being applied by Google. The Guardian’s James Ball reported on Wednesday that his newspaper had received an email notification from Google saying six Guardian articles had been scrubbed from search results.

“Three of the articles, dating from 2010, relate to a now-retired Scottish Premier League referee, Dougie McDonald, who was found to have lied about his reasons for granting a penalty in a Celtic v Dundee United match, the backlash to which prompted his resignation,” Ball wrote. “The other disappeared articles are a 2011 piece on French office workers making post-it art, a 2002 piece about a solicitor facing a fraud trial standing for a seat on the Law Society’s ruling body and an index of an entire week of pieces by Guardian media commentator Roy Greenslade.”

Similarly, the BBC was told that the link to a 2007 article by the BBC’s Economics Editor, Robert Peston, had also been removed.

Neither The Guardian nor the BBC has any form of appeal against the decision, nor were the organisations told why the decision was made or who requested the removals. You may argue – as some have done – that Google is deliberately selecting these stories (involving well-known journalists with large online followings) as a kind of non-compliant compliance to prove that the ruling is unworkable. Certainly, a fuller picture of the types of request, and much more detailed information about how decisions are arrived at, is essential. You can also point to the fact that it is easy to find the removed articles simply by going to a search engine’s domain outside Europe.

The fact remains that this ruling is deeply problematic, and needs to be challenged on many fronts. We need policymakers to recognise that this flabby ruling must be tightened up fast with proper checks and balances: clear guidelines on what can and should be removed (not leaving it to Google and others to define their own standards of ‘relevance’), demands for transparency from search engines about who makes removal decisions and how, and an appeals process. If search engines really believe this is a poor ruling, they should make a clear stand against it by passing all right to be forgotten requests to data protection authorities to decide. The flood of requests directed at these already stretched national organisations might help to focus minds on how to prevent a ruling intended to protect personal privacy from becoming a blanket invitation to censorship.

This article was posted on 3 July 2014 at indexoncensorship.org

Right to be forgotten: “A blunt instrument ruling that opens the door for widespread censorship”

Commenting on the recent articles removed from search engines by Google, Jodie Ginsberg, CEO of Index on Censorship, said:

“As Index on Censorship warned when the ruling was delivered last month, the ‘right to be forgotten’ is a blunt instrument ruling that opens the door for widespread censorship and the whitewashing of the past.

“Private companies like Google should not have been handed the power to make decisions – that lack any kind of transparency and accountability – about what information can and cannot be found on the internet.”

Further information:

Index urges court to rethink ruling on “right to be forgotten” (30 May, 2014)

Are search engines the ultimate arbiters of information? (14 May, 2014)

Index blasts EU court ruling on “right to be forgotten” (13 May, 2014)

When Google tripped: Forgetting the right to be forgotten


On May 13, the Court of Justice of the European Union (CJEU) held in Google Spain v AEPD and Mario Costeja González that there was a “right to be forgotten” in the context of data processing on internet search engines. The case had been brought by a Spanish man, Mario Costeja González, after his unsuccessful attempts to have a 1998 auction notice for his repossessed home removed from the website of La Vanguardia, a widely read newspaper in Catalonia.

The CJEU considered the application of various sections of Article 14 of EU Directive 95/46/EC of the European Parliament and of the Council of October 24, 1995 covering the processing of personal data and the free movement of such data.

A very specific philosophy underpins the directive: the belief that data systems are human productions, created by humans for humans. In the words of the preamble to Directive 95/46, “data processing systems are designed to serve man; … they must, whatever the nationality or residence of natural persons, respect their fundamental rights and freedoms, notably the right to privacy, and contribute to … the well-being of individuals.”

Google Spain and Google Inc.’s argument was that such search engines “cannot be regarded as processing the data which appear on third parties’ web pages displayed in the list of search results”. The information is processed without “effecting the selection between personal data and other information.” González, and several governments, disagreed, arguing that the search engine was the “controller” regarding data processing. The Court accepted the latter argument.

Attempts to distinguish the two entities (Google Inc. and Google Spain) also failed. Google Inc. might well have operated in a third state, but Google Spain operated in a Member State. To exonerate the former would render Directive 95/46 toothless.

The other side of the coin, and one Google is keen to stress, is that such a ruling is a gift to the forces of oppression. A statement from a Google spokesman noted how, “The court’s ruling requires Google to make difficult judgments about an individual’s right to be forgotten and the public’s right to know.”

Google’s Larry Page seemingly confuses the necessity of privacy with the transparency (or opacity) of power.  “It will be used by other governments that aren’t as forward and progressive as Europe to do bad things.  Other people are going to pile on, probably… for reasons most Europeans would find negative.”  Such a view ignores that individuals, not governments, have the right to be forgotten.  His pertinent point lies in how that right might well be interpreted, be it by companies or supervisory authorities. That remains the vast fly in the ointment.

Despite his evident frustrations, Page admitted that Google had misread the EU smoke signals, having been less involved in matters of privacy and more committed to a near dogmatic stance on total, uninhibited transparency. “That’s one of the things we’ve taken from this, that we’re starting the process of really going and talking to people.”

A sense of proportion is needed here. The impetus to make data available in the name of transparency falls more heavily on powerful agencies and entities than on private individuals, who may prefer to leave few traces for inquisitive searchers. Much of this lies in the entrusting of power: those who hold it should be visible; those who have none are entitled to be invisible. This invariably carries implications for the information-hungry generation that Google has tapped into.

The critics, including those charged with advising Google on how best to implement the EU Court ruling, have worries about the routes of accessibility. Information ethics theorist Luciano Floridi, one such advisor, argues that the decision spells the end of freely available information. The decision “raised the bar so high that the old rules of the internet no longer apply.”

For Floridi, the EU Court ruling might actually allow companies to determine the nature of what is accessible. “People would be screaming if a powerful company suddenly decided what information could be seen by what people, when and where.” Private companies, in other words, would have to be the judges of the public interest, an unduly broad vesting of power. The result, for Floridi, will be a proliferation of “reputation management companies” engaged in targeting compromising information.

Specialist on data law, Christopher Kuner, suggests that the Court has shown a lack of concern for the territorial application, and implications, of the judgment.  It “fails to take into account the global nature of the internet.”  Wikipedia’s founder, Jimmy Wales, also on Google’s advisory board, has fears that Wikipedia articles are set for the censor’s modifying chop.  “When will a European court demand that Wikipedia censor an article with truthful information because an individual doesn’t like it?”

The Court was by no means oblivious to these concerns. A “fair balance should be sought in particular between that interest [in having access to information] and the data subject’s fundamental rights under Articles 7 [covering respect for private and family life] and 8 [covering protection of personal data] of the Charter.” Whether there could be a justifiable infringement of the data subject’s right to private information would depend on the public interest in accessing that information, and “the role played by the data subject in public life.”

To that end, Google’s removal service is available only to European citizens. Its completeness remains to be tested. Applicants are entitled to seek removal on such grounds as material being “inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes for which they were processed.”

An explanation must accompany the application, including digital copies of photo identification, indicating that ever delicate dance between free access and anonymity. For Google, as if it were an unusual illness, one has to justify the assertion of anonymity and invisibility on the world’s most powerful search engine.

Others have shown far more enthusiasm. Google’s removal programme received 12,000 submissions on its first day, with about 1,500 coming from the UK alone. Floridi may well be right: the age of open access is over. The question of who limits access to information in the context of a search, and what such limits produce, continues to loom large. The right to know jousts with the entitlement to be invisible.

This article was published on June 2, 2014 at indexoncensorship.org

Both Google and the European Union are funders of Index on Censorship


Index urges court to rethink ruling on “right to be forgotten”

Index reiterates its concern at the ruling on the so-called “right to be forgotten” and its implications for free speech and access to information. Index urges the court to put a stay on its ruling while it pursues a regulatory framework that will provide legal oversight, an appeals process and ensure that private corporations are not the arbiters of public information.

While it is clearly understandable that individuals should want to be able to control their online presence, the court’s ruling fails to offer sufficient checks and balances to ensure that a desire to alter search requests so that they reflect a more “accurate” profile does not simply become a mechanism for censorship and whitewashing of history.

Issued without a clearly defined structure to police the requests, the court ruling has outsourced what should be the responsibility of publicly accountable bodies to private corporations that are under no obligation to protect human rights or act in the public interest. Index will be monitoring very closely the processes and procedures used by Google and others to make decisions.

Although Google has devised an advisory committee to support its decision-making, the fact remains that search engines will be making decisions about what is deemed “irrelevant and inappropriate” – in a situation that fails to take into account that information deemed “irrelevant” now may become extremely relevant in future.

Index urges the court to go back and reconsider its directions to search engines. It must devise a clear structure for managing requests that balances the public’s right to information, freedom of expression and privacy rights.

For more information call: +44 (0) 207 260 2660
