When Google tripped: Forgetting the right to be forgotten


On May 13, the Court of Justice of the European Union (CJEU) held in Google Spain v AEPD and Mario Costeja González that there was a “right to be forgotten” in the context of data processing on internet search engines. The case had been brought by a Spanish man, Mario Costeja González, after his failed attempts to have removed a 1998 auction notice for his repossessed home, still available on the website of La Vanguardia, a widely read newspaper in Catalonia.

The CJEU considered the application of various sections of Article 14 of EU Directive 95/46/EC of the European Parliament and of the Council of October 24, 1995, covering the processing of personal data and the free movement of such data.

A very specific philosophy underlies the directive. Chief among its tenets is the belief that data systems are human productions, created by humans for humans. In the words of the preamble to Directive 95/46, “data processing systems are designed to serve man; … they must, whatever the nationality or residence of natural persons, respect their fundamental rights and freedoms, notably the right to privacy, and contribute to … the well-being of individuals.”

Google Spain and Google Inc.’s argument was that such search engines “cannot be regarded as processing the data which appear on third parties’ web pages displayed in the list of search results”. The information is processed without “effecting the selection between personal data and other information.” González, and several governments, disagreed, arguing that the search engine was the “controller” of the data processing. The Court accepted their argument.

Attempts to distinguish the entities (Google Inc. and Google Spain) also failed. Google Inc. might well have operated in a third state, but Google Spain operated in a Member State.  To exonerate the former would render Directive 95/46 toothless.

The other side of the coin, and one Google is keen to stress, is that such a ruling is a gift to the forces of oppression. A statement from a Google spokesman noted how, “The court’s ruling requires Google to make difficult judgments about an individual’s right to be forgotten and the public’s right to know.”

Google’s Larry Page seemingly confuses the necessity of privacy with the transparency (or opacity) of power.  “It will be used by other governments that aren’t as forward and progressive as Europe to do bad things.  Other people are going to pile on, probably… for reasons most Europeans would find negative.”  Such a view ignores that individuals, not governments, have the right to be forgotten.  His pertinent point lies in how that right might well be interpreted, be it by companies or supervisory authorities. That remains the vast fly in the ointment.

Despite his evident frustrations, Page admitted that Google had misread the EU smoke signals, having been less engaged with matters of privacy and more committed to a near-dogmatic stance of total, uninhibited transparency. “That’s one of the things we’ve taken from this, that we’re starting the process of really going and talking to people.”

A sense of proportion is needed here. The case for making data available in the name of transparency is far stronger when it concerns powerful agencies or entities than when it concerns private individuals who would prefer to leave few traces for inquisitive searchers. Much of this turns on the distribution of power: those who hold it should be visible; those who have none are entitled to be invisible. This invariably carries implications for the information-hungry generation that Google has tapped into.

The critics, including those charged with advising Google on how best to implement the EU Court ruling, have worries about the routes of accessibility.  Information ethics theorist Luciano Floridi, one such specially charged advisor, argues that the decision spells the end of freely available information.  The decision “raised the bar so high that the old rules of Internet no longer apply.”

For Floridi, the EU Court ruling might actually allow companies to determine the nature of what is accessible. “People would be screaming if a powerful company suddenly decided what information could be seen by what people, when and where.” Private companies, in other words, would have to be the judges of the public interest, an unduly broad vesting of power. The result, for Floridi, will be a proliferation of “reputation management companies” engaged in targeting compromising information.

Christopher Kuner, a specialist in data law, suggests that the Court has shown a lack of concern for the territorial application, and implications, of the judgment. It “fails to take into account the global nature of the internet.” Wikipedia’s founder, Jimmy Wales, also on Google’s advisory board, fears that Wikipedia articles are set for the censor’s modifying chop. “When will a European court demand that Wikipedia censor an article with truthful information because an individual doesn’t like it?”

The Court was by no means oblivious to these concerns. A “fair balance should be sought in particular between that interest [in having access to information] and the data subject’s fundamental rights under Articles 7 [respect for private and family life] and 8 [protection of personal data] of the Charter.” Whether an infringement of the data subject’s rights could be justified would depend on the public interest in accessing that information, and “the role played by the data subject in public life.”

To that end, Google’s removal service is available only to European citizens. Its completeness remains to be tested. Applicants are entitled to seek removal on grounds that material is “inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes for which they were processed.”

An explanation must accompany the application, including digital copies of photo identification, indicating that ever-delicate dance between free access and anonymity. For Google, as if it were an unusual illness, one has to justify the assertion of anonymity and invisibility on the world’s most powerful search engine.

Others have shown far more enthusiasm. Google’s removal program received 12,000 submissions on its first day, about 1,500 of them from the UK alone. Floridi may well be right – the age of open access is over. The question of who limits access to information in the context of a search, and what that search produces, continues to loom large. The right to know jousts with the entitlement to be invisible.

This article was published on June 2, 2014 at indexoncensorship.org

Both Google and the European Union are funders of Index on Censorship


Index urges court to rethink ruling on “right to be forgotten”

Index reiterates its concern at the ruling on the so-called “right to be forgotten” and its implications for free speech and access to information. Index urges the court to put a stay on its ruling while it pursues a regulatory framework that will provide legal oversight, an appeals process and ensure that private corporations are not the arbiters of public information.

While it is clearly understandable that individuals should want to be able to control their online presence, the court’s ruling fails to offer sufficient checks and balances to ensure that a desire to alter search requests so that they reflect a more “accurate” profile does not simply become a mechanism for censorship and whitewashing of history.

Issued without a clearly defined structure to police the requests, the court ruling has outsourced what should be the responsibility of publicly accountable bodies to private corporations that are under no obligation to protect human rights or act in the public interest. Index will be monitoring very closely the processes and procedures used by Google and others to make decisions.

Although Google has convened an advisory committee to support its decision-making, the fact remains that search engines will be making decisions about what is deemed “irrelevant and inappropriate” – a situation that fails to take into account that information deemed “irrelevant” now may become extremely relevant in future.

Index urges the court to go back and reconsider its directions to search engines. It must devise a clear structure for managing requests that balances the public’s right to information, freedom of expression and privacy rights.

For more information call: +44 (0) 207 260 2660

****


Counterpoint: Your personality is your castle

Graham Ginsberg shows how he feels about his search engine profile. (Photo: Graham Ginsberg)

The internet is so much more significant than a newspaper article. It’s bigger than print in its longevity and reach, and it’s forever growing, shaping the public lives of all generations, past, present and future.

The handling of this information has become exponentially important. The quote “With great power comes great responsibility” comes to mind. But where is the responsibility when it comes to showing our personalities, our castle, in search engines?

When search engines choose to show information about me, as an example, do they show all available information about me or do they choose certain articles and pictures they consider most relevant and fresh to show the public? And why is there no redress available to me to deal with how search engines portray me?

I recently submitted a complaint to three major search engines requesting that they remove certain pictures and references to articles about me that were old and irrelevant.

Google didn’t respond, but Bing’s Technical Support did, saying: “Thank you for contacting Bing Technical Support regarding your request to remove content from the Bing search engine. Working directly with the site owner or webmaster for removal of the content is the best way to resolve your issue. Bing doesn’t control the operation or design of the websites we index. We also don’t control what these websites publish. As long as the website continues to make the information available on the web, the information will be generally available to others through Bing or other search services.”

But this isn’t entirely true, on several levels. It’s their boilerplate response, a kind of “it’s not our fault” statement.

Search engines like Bing, Google and Yahoo do limit and restrict the information they show in search results, using software that prioritizes and sorts data into a format it deems suitable.

What is censorship?

Censorship is the act of censoring: the removal or suppression of what is considered morally, politically, or otherwise objectionable. And this is precisely what search engines do right now, in their own way, using customised algorithms.

Bing suggested that I contact the webmaster in the hope that they would remove the information from being indexed on the internet – that they remove the story as if it never existed. But that isn’t my gripe. I have no problem with the fact that our local daily newspaper has pictures and an article about me protesting in front of their establishment. They have every right to keep it, and I don’t want it removed from their website.

But I have an issue with Bing for showing that information as if it were the only information there was on me. And there is a ton of information about me, from articles I have written for the local paper on a whole range of subjects, from national beach-access issues to real estate, but little if any of it appears in the search results. Just me standing in front of the paper protesting with a noose around my neck, almost ten years ago. Maybe the algorithm looks for keywords like “noose” and ranks them higher than, say, an article about NBA Hall of Famer Larry Bird and his house, which I contributed to the same paper last year.

Because search engines are moneymaking machines, any customized filtering of search results will cost them dearly. And why not? They’re businesses like any other and should be held accountable for the product they’re selling. But what makes search engines different from any other business is that they’re so big and powerful, and that is why governments need the means to force them into compliance.

It’s the Wild West all over again. Asking search engines like Google and Bing to police themselves, to be fair and moral, has proven futile; why should they care or even act accordingly?

To keep the peace for the common good, laws need to have teeth to force offending search engines to comply with logical guidelines that protect the interests of the public and not just interests of these large search engine corporations.

This article was posted on 22 May 2014 at indexoncensorship.org

Counterpoint: “Right to be forgotten” is a step in the right direction


Enshrining the right to be forgotten is a further step towards allowing individuals to take control of their own data, or even monetise it themselves, as we proposed in the Project 2020 white paper (Scenarios for the Future of Cybercrime). As the law stands in the EU, we have legal definitions for a data controller, a data processor and a data subject – an oddity that lands each of us in the bizarre situation of being subjects of our own data rather than being able to assert any notion of ownership over it. With data ownership comes the right to grant or deny access to that data and to be responsible for its accuracy and integrity.

In response to the ECJ judgement, I have seen many commentators cry “censorship” and make all kinds of unsupportable comparisons with book burning (or pulping). These reactions are misguided and out of all proportion to the decision made. Let’s remember what has been decreed: an individual has the right to request that certain information be de-indexed from search and aggregation engines. That request is not an order, and each one must go through due process and consideration before any changes are made, including, if necessary, consideration by a court of law. Individuals are not being granted the right to rewrite history; they are being given the right to request, within the strictures of the law, that certain publishers cease to publish information about them which they consider deleterious. They are being given the right to manage their own image online. It seems bizarre that this right is seen by some as the repression of free speech when in effect it gives the individual the right to speak up about something they find personally damaging.

In 2009, an organisation called “The Consulting Association” was found to be operating a commercial blacklisting service for the construction industry. The organisation held detailed files on construction professionals, listing their names, family relationships, newspaper cuttings and details of criminal records. Several global construction companies paid for access to this data, and over 3,000 individuals were potentially prevented from gaining employment in their industry. Of course this shocks us, and rightly the Information Commissioner took action, seizing the data in question and informing those affected. In many ways a search engine’s constant aggregation of data, and even more its contextualisation and publication of that data as relevant to a given name, fulfils the same function; now you have a right at least to influence it, even if you cannot stop it.

The ruling is the right one. The court recognises that information that was “legally published” remains so, and that the individual has no right to censor it. However, it also recognises that search engines collect, retrieve, record, organise, store and disclose information on an ongoing basis, and that this constitutes “processing” of data under the EU directive. Further, given that a search engine determines the means and purpose of its own data processing, it is also a “data controller” under that directive and must fulfil the legal requirements of such an entity; any other decision would have weakened the whole directive beyond repair. The entirety of the information turned up in response to a search on a person’s name represents a whole new level of publishing, and the discrete items of information would have been very difficult, if not impossible, to put together in the absence of a search engine.

While there will of course be technical and procedural issues arising from this ruling, and there will doubtless be individuals seeking to evade public scrutiny, any other decision would simply have blown away the EU Data Protection Directive, and that is not something any of us should be advocating. Consider the wider ramifications of this decision: if a search engine is now a “data controller” in the eyes of the law, shouldn’t it be notifying us whenever it collects information about us? Wouldn’t it be a breath of fresh air if you could begin to understand the wealth of information out there about you and begin to realise an income from it yourself? Personal information is a commodity that commands a financial premium, and right now it is others who realise those gains. It’s time we advocated for real ownership of our own data.

Before personal data became a commodity mined by corporations and attackers alike, the need for a legal stance on the identity of the “owner” of data relating to oneself may have seemed laughable. However, that has landed us in today’s situation, in which entities that mine and monetise that same data can refer to this very welcome EU ruling as “disappointing”. Commercially disappointing it may be; it is nonetheless a step, albeit a small one, in the right direction.

This article was originally posted on May 13, 2014 at countermeasures.trendmicro.eu