Data retention and legality: The fall of the EU’s Data Retention Directive

(Photo illustration: Shutterstock)

Retaining data is the reflex of a functioning bureaucracy. What is stored, how it is stored, and when it is disseminated pose the great trinity of management. These principles lurk, ostensibly at least, under an umbrella of privacy. The European Union puts much stake in Article 8 of the European Convention on Human Rights, stressing the values of privacy that cover home, family and correspondence. But there are also wide qualifications – interferences are warranted in the interests of national security and public safety, allowing Member States, and the EU, a degree of room to gnaw away at privacy rights.

That entitlement to privacy has gradually diminished in favour of the “security” limb of Article 8.  The surveillance narrative is shaping privacy as a necessarily circumscribed right.  The realm of monitoring and surveillance is being extended.  Technologies have proliferated; laws have remained, if not stagnant, then ineffective.

Unfortunately for those occasionally oblivious drafters of rules in Brussels, the judges of the Court of Justice of the EU did not take kindly to the Data Retention Directive, which requires telecommunications and internet providers to retain traffic and location data. That is not all – the directive also requires the retention of data identifying the user or subscriber, a true stab against privacy proponents keen on principles of anonymising users.

The objective of the DRD, like so many matters concerned with bureaucratic ordering, is procedural: to harmonise regimes of data retention across the various Member States. More specifically, Directive 2006/24/EC of the European Parliament and of the Council of March 15, 2006 deals with the “retention of data generated or processed in connection with the provision of publicly available electronic communications services or of public communications networks”.

Other courts had expressed concern with the directive, which propelled the hearings to the ECJ. These arose from separate complaints in Ireland and Austria, brought by citizens and other parties against the authorities. The Irish case began with a challenge by Digital Rights Ireland in 2006. The Austrian challenge was pushed by the Kärntner Landesregierung (Government of the Province of Carinthia) and numerous other concerned parties seeking to annul the local legislation incorporating the directive into Austrian law.

The Constitutional Court of Austria and the High Court of Ireland shook their judicial fingers with rigour against it – the judges were not pleased. The disquiet continued to their brethren on the ECJ, which proceeded to make its stance on the scope of the retention law clear by declaring it invalid. EU officials should have seen it coming – in December 2013, the Advocate General of the ECJ was already of the opinion that the DRD constituted “a serious interference with the privacy of those individuals” and a “permanent threat throughout the data retention period to the right of citizens of the Union to confidentiality in their private lives.”

The defensive stance taken by the authorities is so old it is gathering dust. Technology changes, but government rationales never do. Invariably, the defence is two-pronged. The ever-pressing concern of security forms the first. The second: that such behaviour does not violate privacy – at least not disproportionately. You will find these principles operating in tandem in each defence on the part of authorities keen to justify extensive data retention. Such intrusive measures have as their object the gathering of information, rather than the gathering of useful data. Usefulness is almost never evaluated as a criterion for extending the law. Instinct, not evidence, is what counts.

The rationale of the first premise is simple enough: information, or data, is needed to fight the shady forces of crime and terrorism. Better data retention practices equate to a more solid defence against threats to public security. The ECJ acknowledged the reason as cogent enough – that data retention “genuinely satisfies an objective of general interest, namely, the fight against serious crime and, ultimately, public security.” The authorities were also keen to emphasise that such a regime of retention was not “such as to adversely affect the essence of the fundamental rights to respect for private life and to the protection of personal data.”

In dismissing the main arguments of the authorities, the points of the court are clear. In retaining the data, it is possible to “know the identity of the person with whom a subscriber or registered user has communicated and by what means”. Identification of the time of the communication, and of the place from which it took place, is also possible. Finally, the “frequency of the communications of the subscriber or registered user with certain persons during a given period” is also disclosed. Taken together, these composites provide “very precise information on the private lives of the persons whose data are retained, such as the habits of everyday life, permanent or temporary places of residence, daily or other movements, activities carried out, social relationships and the social environments frequented.” Former Stasi employees would be swooning.

The judgment provides a relentless battering of a directive that should never have left the drafter’s desk. “The Court takes the view that, by requiring retention of those data and by allowing competent national authorities to access those data, the directive interfered in a particularly serious manner with the fundamental rights to respect for private life and to the protection of personal data.”

The laws of privacy tend to focus on specificity and limits. If there is to be interference, it should be proportionate. The directive failed at the most vital hurdle – if privacy is to be interfered with, do so in even measure and with minimal intrusion. The DRD had, in effect, “exceeded the limits imposed by compliance with the principle of proportionality.” The decision is unlikely to kill off regimes of massive data retention – it will simply make those favouring surveillance over privacy more cunning.

This article was posted on April 9, 2014 at indexoncensorship.org

We need a privacy rapporteur, says UN free speech boss

(Image: Mahmoud Illean/Demotix)

There should be a special United Nations mandate for protecting the right to privacy, says Frank La Rue, the UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression. “I believe that privacy is such a clear and distinct right…that it would merit to have a rapporteur on its own.”

While he pointed out there is some opposition to creating new mandates on economic grounds, he said: “In general, if you would ask me, I would say yes, this right deserves a [UN] mandate.” He also called for a coordinated effort from the UN human rights system to deal with the issue of privacy.

The comments came during an expert seminar in Geneva Monday on “The Right to Privacy in the Digital Age” in the aftermath of the mass surveillance revelations.

La Rue said that the right to privacy has not been given enough attention in the past, calling it equal to, and interrelated and interdependent with, other human rights. In particular, he spoke of its connection to freedom of expression, and of how having or lacking privacy can affect freedom of expression.

“Privacy and freedom of expression are not only linked, but are also facilitators of citizen participation, the right to free press, exercise of free opinion, and the possibility of gathering individuals, exercising the right to free association and to be able to criticise public policies.”

He also warned against trying to protect national security at the expense of democracy and human rights, saying: “If we pitch one against the other…I think we’ll end up losing both.”

This echoes the sentiments of his report released in June 2013, which concluded that: “States cannot ensure that individuals are able to freely seek and receive information or express themselves without respecting, protecting and promoting their right to privacy.”

This article was posted on 25 February 2014 at indexoncensorship.org

Is India’s biometric benefits database trampling privacy?

(Image: Sergey Nivens/Shutterstock)

In 2009 India announced its grand universal biometric scheme, “Aadhaar”. The scheme, managed by the Unique Identification Authority of India (UIDAI), collects the fingerprints, iris scans and facial images of applicants in exchange for a national identification number. First handed out in 2010, the numbers, randomised 12-digit codes, function as “internal passports” which can be used as proof of identity to access state services.

November 2013 marked 500 million enrolments to the scheme, making Aadhaar the largest biometric programme in the world. This year the scheme is set to be linked to major development reforms, and the collection of data, stored in a centrally controlled database, aims to improve transparency, reduce corruption and ensure access to the country’s myriad of welfare benefits.

India’s welfare state is characterised by “leakage”: corrupt middlemen syphoning off benefits and claimants taking more than their share. The biometric scheme plays an important role in making sure that those who are entitled to state aid receive it. But despite this developmental progress, India lacks comprehensive protections for biometric data, raising serious concerns about individual privacy.

A report by Oxford Pro Bono Publico, a research centre affiliated to the University of Oxford, found India’s controls over the collection, storage and use of biometric data, compared to other jurisdictions, hugely deficient.

The sheer scale of the project compounds concerns, with UIDAI aiming to enrol every one of India’s 1.2 billion people.  The scheme was first introduced as voluntary, but as more and more development schemes are administered through it, welfare recipients seeking state aid have little choice but to hand over their data.

Justice Puttaswamy, a retired High Court judge, has led the charge in challenging the scheme on privacy grounds. As he argued in his petition to the Indian Supreme Court, “there are no safeguards or penalties and no legislative backing for obtaining personal information”. His complaint culminated in a Supreme Court interim order, which insisted that the scheme must remain voluntary and that those entitled to receive welfare should do so regardless of their Aadhaar status.

Attempts to circumvent the Aadhaar programme to deliver benefits, however, have become increasingly difficult. Last year, despite the Supreme Court order, reports emerged from Delhi that food-subsidy ration cards were only being handed out to those with national identification numbers. A recent announcement by the Minister for Food and Civil Supplies, that consumers without Aadhaar cards would continue to receive discounted cooking gas, provoked oil companies and the Union Ministry of Petroleum and Natural Gas to return to the Supreme Court to file an appeal.

Aadhaar was introduced via an executive order, a lack of statutory backing that critics argue makes the scheme unconstitutional. As Shyam Divan, a practising lawyer and petitioner in a case against the UIDAI, explains, there is no legislative oversight of the collection, storage and use of biometric data. Controls on access are similarly scant: there are no provisions that address who can access the data, when, or why. At the field level, agents enrolling applicants to the scheme are employed privately and work without government supervision. Once collected, the data passes through private hands before being transferred to the UIDAI’s central repository. Corporations (including the consulting firm Accenture, tech-solutions firm Morpho and American defence contractor L-1 Identity Solutions) are involved at every stage of the operation, a sprawling collection and transmission network that campaigners fear maximises the opportunity for abuse.

The case against the scheme on constitutional grounds is equally robust. Every time a person uses their unique Aadhaar number, a real-time confirmation passes between the access point and the central database, a process that activists complain amounts to covert surveillance. Critics argue that this tracking violates the right to privacy enshrined in Article 21 of the Indian Constitution. According to campaigners, insufficient information on the data-collection process also amounts to a lack of informed consent, a further rights violation.

Through public interest litigation, various groups have taken the UIDAI to court over the lack of statutory backing and inadequate data protection, suits that the state has dismissed as “frivolous, misleading and legally incorrect” attempts at derailing a “project that aims to promote inclusion and benefit marginalized sections of society”.

The size and inefficiency of India’s welfare state imposes enormous pressures on officials to improve service delivery. The scheme’s defenders invoke a democratic justification, arguing the government has a responsibility to ensure that welfare spending reaches those that are most in need.

Nandan Nilekani, chairperson of the UIDAI, has admitted that he may not have done enough to persuade people of the benefits of the scheme. But as Justice Puttaswamy insists, “the way the government has gone about implementing this project is odd and illegal,” and questions about privacy still loom large.

This article was posted on 31 January 2014 at indexoncensorship.org

Online privacy as an active pursuit

Illustration: Shutterstock

I had arranged to meet ‘Emma’ in a cafe, at the behest of a mutual friend. As a student of forensic computing, I was asked to help educate Emma about online privacy, a particular passion of mine.

Emma is an unassuming 24-year-old. Nothing about her physical appearance or mannerisms would divulge anything of the abuse she was subjected to at the hands of a former boyfriend.

She explained that she had experienced on-going violence while in a three-year relationship. Her partner had physically, verbally and emotionally abused her. In addition, her former boyfriend monitored and restricted her access to the Internet.

“I didn’t have anything private. I couldn’t do anything without him asking something about my behaviour, or my intentions, or whatever else I was doing. It was physical and psychological entrapment at its worst,” she said.

For Emma, our meeting was about learning to use tools to take control of her privacy in an age of mass monitoring. She was taking back the capabilities that were torn from her by her abusive boyfriend by becoming empowered to protect herself in the online sphere.

During our conversation, I shared various techniques and tools she could use to browse sites anonymously, and I explained the concepts and principles of privacy-enhancing technologies – software including Tor and I2P (Invisible Internet Project), which would enable her to protect her identity.

Unsurprisingly, Emma has become extremely protective of her access to the Internet; a residual scar of the control she found herself being subjected to.

I was mindful also that this was the first opportunity I’d had to humanise a subject I’m deeply passionate about – often labelled crypto-anarchism – and to effect meaningful, beneficial change in someone else’s life. It also acted as a sobering realisation of the technological capabilities and opportunities available to ordinary citizens to thwart mass surveillance perpetrated by the National Security Agency, GCHQ and their ilk.

Technology has shifted traditional notions of personal privacy in unforeseen ways. We’ve entered a new world order, in which tools of oppression and exploitation are often pointed inward by a state acting as the abusive partner.

I couldn’t help but be reminded of Duncan Campbell’s Secret Society episode We’re All Data Now: Secret Data Banks, broadcast 26 years ago. It detailed the swathes of information held on the entire populace of Britain in private sector databases; specifically, I recalled the horror on the faces of unwitting participants as Campbell accessed sensitive personal information from a computer terminal with minimal effort.

Councils sell copies of that data for a pittance nowadays; it’s the electoral register. In comparison to today’s data brokers, behemoth custodians of in-depth data held about each and every one of us, the private databases of yesteryear seem almost quaint. Surveillance is as ubiquitous as ever, and so pervasive that it has merged into an almost indecipherable cacophony, from data-mining business models to gluttonous mass surveillance by the government and its agents. Each has a thread in common, a fundamental component.

You.

Or, of most concern to me during our meeting, Emma.

Often the stakes and associated risks of using modern technologies are magnified considerably for those suffering physical harm, psychological abuse or harassment. This is especially true for those fearful of seeking information or resources in genuine confidence – a capability cherished by people in such strained circumstances. As I listened to her experiences, I grasped from Emma’s tone that she was afraid of exposing herself to potential further abuse.

“In some countries,” I told her, “I’d be considered dangerous. The skills I’m teaching you wouldn’t be tolerated, much less encouraged.”

In the aftermath of Edward Snowden’s disclosures of the activities of the United States’ NSA and the United Kingdom’s GCHQ – disclosures spearheaded by Glenn Greenwald, Laura Poitras and The Guardian – it has become imperative that the sociological impacts of surveillance be recognised and addressed directly, if societies are to protect each and every one of their participants from such endemic spying.

But, too often, this insidious encroachment is interpreted solely as a technological problem, with the assumption that surveillance must be countered wholly by technological means. While technology is a component of a solution, it cannot derail the potential for abuse on its own.

Ultimately, the answer to surveillance on a personal or societal level demands a radical overhaul of attitudes and perceptions. People must share information, techniques and tools to help one another protect their civil liberties. People must encourage each other to cherish their online and offline privacy. Technological mutual solidarity, if you will. Ecosystems and privacy-enhancing technologies such as Tor and I2P, amongst a plethora of others, cater for exactly this ideal: privacy by design, rather than by public policy.

Because it isn’t just about personal privacy anymore; nor was it, in fact, ever. It is also about dignity, morality and using technology as a vehicle to emancipate, to facilitate, and to affirm an underlying respect for individuals as citizens and – especially in Emma’s circumstance – their sanctity as human beings.

Actively manage YOUR online privacy

Tor Project

A privacy-enhancing technology ecosystem which enables users to communicate and browse anonymously, and to circumvent internet censorship, by routing traffic via intermediate nodes before it is transmitted to the intended site; this prevents third parties from discovering a user’s location or browsing habits.
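Tor’s real protocol builds TLS-protected circuits with layered symmetric keys negotiated per relay, but the core “onion” idea – the sender wraps a message in one layer of encryption per relay, and each relay can peel exactly one layer – can be sketched as a toy. The XOR “cipher”, the node names and the keys below are purely illustrative, not Tor’s actual cryptography:

```python
import secrets

def xor_layer(data: bytes, key: bytes) -> bytes:
    # XOR is symmetric: applying the same key twice recovers the input.
    return bytes(b ^ k for b, k in zip(data, key))

# Three hypothetical relays, each holding its own secret key.
keys = {node: secrets.token_bytes(64) for node in ("entry", "middle", "exit")}

message = b"hello from the sender".ljust(64)

# The sender wraps the message in one layer per relay, exit layer innermost.
onion = message
for node in ("exit", "middle", "entry"):
    onion = xor_layer(onion, keys[node])

# Each relay peels exactly one layer as the onion travels the circuit; only
# the exit node sees the plaintext, and no single relay learns both who sent
# the message and where it is going.
for node in ("entry", "middle", "exit"):
    onion = xor_layer(onion, keys[node])

assert onion == message
```

The point of the sketch is the separation of knowledge: the entry relay knows the sender but sees only ciphertext, while the exit relay sees the destination but not the sender.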

I2P

Software with similar capabilities to Tor, permitting anonymous or pseudonymous browsing. It can be used standalone or in conjunction with other software to help ensure communications remain as confidential as possible. It also contains web-based email, among other features, within its operating environment, which is accessible only via I2P itself.

TAILS

A live-CD operating system comprising an entire operating environment, containing both of the aforementioned tools and additional software. It discloses no evidence of its use: it is self-contained on a DVD-R and operates entirely in RAM, erasing traces of its presence once the machine is switched off.

Off-the-Record Messaging

An instant-messaging plug-in and cryptographic protocol used to create secure instant messaging sessions between users, in such a manner that conversations are plausibly deniable while affording confidential, private communication between participants.
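OTR’s confidentiality rests on Diffie-Hellman key agreement: each party combines its own secret exponent with the other’s public value, so both derive the same shared key without it ever crossing the wire. A minimal sketch of that exchange follows; the tiny numbers and variable names are illustrative only, as real OTR uses a 1536-bit group and adds authentication and continual re-keying on top:

```python
import secrets

# Toy-sized public parameters for illustration only.
p, g = 23, 5

# Each party keeps a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 3) + 2   # Alice's secret exponent
b = secrets.randbelow(p - 3) + 2   # Bob's secret exponent
A = pow(g, a, p)                   # public value sent to Bob
B = pow(g, b, p)                   # public value sent to Alice

# Both sides compute the same shared key without transmitting it:
# (g^b)^a = (g^a)^b (mod p).
key_alice = pow(B, a, p)
key_bob = pow(A, b, p)
assert key_alice == key_bob
```

Because the private exponents never leave either machine, an eavesdropper who captures A and B alone cannot feasibly recover the shared key (at realistic parameter sizes).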

This article was published on 24 January 2014 at indexoncensorship.org