Breaking end-to-end encryption would be a disaster

In August 2021, when the Taliban took over Kabul and home searches became ubiquitous, women started to delete anything they thought could get them in trouble. Books were burned, qualifications were shredded, laptops were smashed. But for 21 members of a women's creative writing group, a lifeline remained: their WhatsApp group. Over the next year they would use this forum to share news with one another (a story that has since been chronicled in the recently published book My Dear Kabul). Doing so through WhatsApp was not incidental. Instead, the app's use of end-to-end encryption provided a strong level of protection. The only way the Taliban could know what the women were saying was if they found their phones, seized them, forced them to hand over their passwords and went into their accounts. They could not otherwise read their messages.

End-to-end encryption is not sexy. Nor do those four words sound especially interesting. It's easy to switch off when a conversation about it starts. But as this anecdote shows, it's vitally important. Another story we recently heard, also from Afghanistan: a man hid from the Taliban in a cave and used WhatsApp to call for help. Through it, safe passage to Pakistan was arranged.

It's not just in Afghanistan that end-to-end encryption is essential. At Index we wouldn't be able to do our work without it. We use encrypted apps for messaging among our UK-based staff and to keep in touch with our network of correspondents around the world, from Iran to Hong Kong. We use it to keep ourselves safe and we use it to keep others safe. Our responsibility for them is made manifest by our commitment to keeping our communication and their data secure.

Beyond these safety concerns, we know end-to-end encryption is important for other reasons. It's important because we share so many personal details online, from who we are dating and who we vote for to when our passport expires, what our bank details are and even our online passwords. In the wrong hands, these details can be very damaging. It's important too because privacy is essential, both in its own right and as a guarantor of our other fundamental freedoms. Our online messages shouldn't be open to all, much as our phone lines shouldn't be tapped. Human rights defenders, journalists, activists and MPs message via platforms like Signal and WhatsApp for their work, as do many others who are simply unsettled by the prospect of having no privacy.
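The principle is simple even if the cryptography underneath is not: messages are encrypted on the sender's device and can only be decrypted on the recipient's, so the company relaying them never holds the keys. Here is a minimal sketch of that idea in Python, using the open-source PyNaCl library (our choice purely for illustration; WhatsApp and Signal actually implement the far more sophisticated Signal protocol, with its X3DH key agreement and Double Ratchet):

    # Minimal sketch of the end-to-end principle (pip install pynacl).
    # An illustration only, not how any real messenger is built.
    from nacl.public import PrivateKey, Box

    # Each party generates a key pair on their own device;
    # the private keys never leave those devices.
    alice_private = PrivateKey.generate()
    bob_private = PrivateKey.generate()

    # Alice encrypts using her private key and Bob's public key.
    sending_box = Box(alice_private, bob_private.public_key)
    ciphertext = sending_box.encrypt(b"We are safe. Meet at the usual place.")

    # The relay server only ever sees this: bytes that are
    # indistinguishable from random noise without an endpoint's key.
    print(ciphertext.hex())

    # Only Bob, holding his private key, can decrypt the message.
    receiving_box = Box(bob_private, alice_private.public_key)
    print(receiving_box.decrypt(ciphertext).decode())

If the relay is compromised, hacked or compelled by a court, all it can hand over is that ciphertext.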

Fortunately, accessible, affordable and easy-to-use encryption is everywhere today. The problem is that its future looks uncertain.

Last October, the Online Safety Act was passed in the UK, a sprawling piece of legislation that puts the onus on social media firms and search engines to protect children from harmful content online. It is due to come into force in the second half of 2025. Section 121 of the Act gives Ofcom powers to require technology companies to “use accredited technology” that could undermine encryption. At the time of the Act's passage, the government gave assurances that this would not happen, but comments from senior political figures like Sadiq Khan, who believe amendments to the Act are needed, have done little to reassure people.

It’s not just UK politicians who are calling for a “back door”.

“Until recently, traditional phone tapping gave us information about serious crime and terrorism. Today, people use Telegram, WhatsApp, Signal, Facebook, etc. (…) These are encrypted messaging systems (…) We need to be able to negotiate what you call a ‘back door’ with these companies. We need to be able to say, ‘Mr. Whatsapp, Mr. Telegram, I suspect that Mr. X may be about to do something, give me his conversations,’” said French Interior Minister Gérald Darmanin last year.
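It is worth being concrete about what such a “back door” would mean in practice. One commonly proposed design amounts to key escrow, in which every message is additionally encrypted to a key held by the authorities. Extending the sketch above (again, our illustration only, not any company's or government's actual design):

    # Key-escrow sketch: one possible "back door" design, shown
    # for illustration only (pip install pynacl).
    from nacl.public import PrivateKey, SealedBox

    # The escrow key pair would be held by the authorities.
    escrow_private = PrivateKey.generate()

    # Alongside the normal end-to-end copy, the app would be
    # required to upload a second copy of every message,
    # encrypted to the escrow public key.
    escrow_copy = SealedBox(escrow_private.public_key).encrypt(
        b"We are safe. Meet at the usual place.")

    # Whoever holds the single escrow private key, whether through
    # a court order, a leak or a hostile hack, can read every
    # user's messages.
    print(SealedBox(escrow_private).decrypt(escrow_copy).decode())

The danger is that the escrow key becomes a single point of failure: it cannot be made available to one government for one suspect without existing, somewhere, for everyone.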

Over the last few years, police across Europe, led by French, Belgian and Dutch forces, have also breached the encryption of users on Sky ECC and EncroChat. Many criminals were arrested on the back of these hacking operations, which were hailed as a success by law enforcement. That may be the case. It's just that people who were not involved in any criminal activity would also have had their messages intercepted. While public outcry on those occasions was muted, it won't be if more commonly used tools such as WhatsApp or Signal are made vulnerable.

Back to the UK: if encryption is broken here, it will be a disaster. Not only would companies like Signal leave our shores; other nations would likely follow suit.

For this reason we’re pleased to announce the launch of a new Index campaign highlighting why encryption is crucial. WhatsApp, the messaging app, have kindly given us a grant to support the work. As with any grant, the grantee has no influence over our policy positions or our work (and we will continue to report critically on Meta, WhatsApp’s parent company, as we would any other entity).

We're excited to get stuck into the work. We'll be talking to MPs, lawyers, people at Ofcom and others both inside and outside the UK. With a new raft of MPs here and with conversations about social media very much in the spotlight everywhere, it's a crucial moment to make the case for encryption loud and clear, both publicly and, if we so choose, in a private, encrypted forum.

Myanmar’s growing doxxing problem

More than two years ago, as Myanmar's coup unfolded, open-source content provided unique insight into what was happening in the country and the battle lines that were soon to emerge. Live from a roundabout in the capital, Naypyidaw, exercise instructor Khing Hnin Wai unwittingly captured and disseminated live footage of the coup via Facebook. For a brief period, images of Khing Hnin Wai dancing in front of a military convoy became symbolic of Myanmar's struggle to maintain democracy.

Here at Myanmar Witness, we use user-generated, openly available content like this to identify, verify and report on events across Myanmar involving abuses of human rights and contraventions of international law. We let the evidence speak for itself when we publish the results of our investigations, collaborate with media and share evidence with justice and accountability mechanisms.

Content we examine is rarely as innocuous as Khing Hnin Wai's video. Since our inception as one of the witness projects at the Centre for Information Resilience, we have used imagery from social media, geospatial providers and other 'open' sources to contribute towards accountability for crimes being committed. These include horrific beheadings, the widespread intentional use of fire, the impact of the conflict on sites with special protections and, at a scale and sophistication beyond what we see in our other witness projects, hate speech and doxxing.

Doxxing exposes the private information of individuals, such as addresses and phone numbers, without their consent. In Myanmar it is done with the intent to intimidate, spread fear and suppress voices. Doxxing has become the digital manifestation of the real-world violence faced by thousands of people in Myanmar every day. Our findings have repeatedly shown that in Myanmar the internet is being used as a weapon, and this is steeped in history: Facebook was widely used as a vehicle for the promotion of hate speech and incitement to violence during the Rohingya crisis, which led to the social media company admitting failings in the way it handled content on its platform.

In January this year, following an investigation into online abuse against Burmese women, we released our Digital Battlegrounds report, which showed how the situation is worsening. Its findings were damning: Facebook and Telegram were hosting politically-motivated abuse targeted at Burmese women. Abuse included real-world threats of violence, gendered hate speech and sexually violent commentary. The source of this content was clear – pro-Myanmar Military accounts and users.

To their credit, and in response to Myanmar Witness and BBC outreach, both Meta and Telegram removed a large amount of content which violated their respective terms of service. However, in the case of Telegram, soon after some accounts were removed or suspended, new ones emerged to take their place. Identifying online abusers and their violent content continues to be painstaking and tedious work.

The online information environment in Myanmar has been, and continues to be, part of the conflict. In the wake of an airstrike by the Myanmar Air Force against Pa Zi Gyi village in April 2023, the darkness of Myanmar's digital conflict resurfaced. With some media reporting more than 160 dead, it was one of the worst airstrikes seen in Myanmar and led to an outpouring of domestic and international sympathy and condemnation.

In Myanmar, a ‘black profile’ campaign emerged online, mourning the victims of the attack. Today’s report by Myanmar Witness investigators shows just how the military regime retaliated with a brutal crackdown — online and offline — against those who dared to show sympathy. For engaging in non-violent online protest, individuals were met with arrests, threats and physical violence. Both their digital and real-world voices were silenced.

Pro-junta groups doxxed those who protested digitally as online sympathy grew in the wake of the airstrike. We found a link: at least 11 of the 20 individuals who were doxxed were then arrested for their activities on Facebook within days of being exposed by pro-junta Telegram channels. They were among a total of 69 people who were arrested within three weeks of the airstrike. In the vast majority of cases, social media activity was the stated reason for their arrest by the authorities.

Some months following their arrest, five individuals who were influential and well-known — a former journalist and several celebrities — were released. Multiple pro-junta Telegram channels hinted at their release before it occurred, indicating information sharing, if not coordination, between these channels and the military authorities. The fate of the more than 60 others detained in the same period remains unclear. Our research only scratches the surface of the vicious digital and physical conflict in Myanmar, and there are no signs of it abating.

While those who incite and intimidate online are ultimately responsible, inadequate moderation of content by social media platforms is part of the problem, as is the protracted war in Myanmar, which recycles and reinforces the online violence. While others go online to perpetuate conflict, we at Myanmar Witness will continue to use digital content to identify, verify and report on the conflict, and to ensure that those at risk of being silenced have their voices heard.

An insidious and unlegislated form of policing?

On a housing estate somewhere in north-west London, a dispute said to be between rival groups of young men apparently rages on. From this quagmire of social deprivation emerges Chinx (OS), who, released from an eight-year custodial sentence at the four-year mark, starts dropping bars as if his very life depended on it. And, in a way, it does. Because for boys like Chinx, young, black and poor, there is only one way out, and that is to become the next Stormzy. Only, two behemoths stand in his way: the Metropolitan Police and their apparent “side man” Meta, parent company of Facebook and Instagram.

In January 2022, Chinx posted a video clip of a drill music track called Secrets Not Safe. Following a request by the Metropolitan Police arguing that the post could lead to retaliatory gang-based violence, Meta removed the post and deleted Chinx's Instagram account.

Meta's decision has since been reviewed by the Oversight Board, a quasi-independent adjudicator conceived to police the online giant's application of its own policies, though funded by the company.

The Board recently condemned the company’s decision to remove Chinx’s post and delete his account as not complying with Meta’s own stated values and with wider human rights considerations.

As part of its review of Meta's decision, the Board made a Freedom of Information Act request to the Met about its requests to remove content from various online platforms. Whilst a good proportion of the Met's response was unhelpful, bordering on obstructive, what it did disclose was troubling.

In the year to the end of May 2022, the Met asked online platforms, including Meta, to remove 286 pieces of content. Every single one of those requests related to drill music; no other music genre was represented. Some 255 of the Met's requests resulted in the removal of content, a success rate of nearly 90%.

The decision makes for illuminating, if worrying, reading when one considers the potential chilling impact Meta's actions may have on the freedom of expression of an already suppressed, marginalised and, some would argue, over-policed section of our community. Four areas of concern emerge.

Law enforcement access to online platforms

Instagram, in common with other applications, has reporting tools available to all users to make complaints. Whilst law enforcement organisations may use such tools, they also have at their disposal what amounts to direct access to these online platforms' internal complaints procedures. When law enforcement makes a request to take content down, Meta deals with it “at escalation”. This triggers an investigation by Meta's internal specialist teams, which includes analysing the content to decipher whether it contains a “veiled threat”.

In my view, this case demonstrates a worrying pattern: namely, the level of privileged access that law enforcement has to Meta's internal enforcement teams, as evidenced by correspondence the Board saw in this case.

Lack of evidence

What became clear from the Board's exposition of the facts was that, despite the apparent need for a causal link between the impugned content and any alleged “veiled threat” or “threat of violence”, law enforcement advanced no evidence in support of its complaint. Given that, as all parties appeared to accept, the content itself was not unlawful, this is shocking.

On the face of it, then, Meta has a system allowing fast-tracked, direct access to its complaints procedure which may result in the removal of content without any cogent evidence that the content would lead to real-life violence or the threat of it.

This omission is particularly stark given that, in this case, the violence alluded to in the lyrics took place approximately five years before the clip was uploaded. This five-year gap, as the Board commented, made it all the more important for real and cogent evidence to be cited in support of removing the content. We ought to remind ourselves that the Board found no evidence in this case of a threat, veiled or otherwise, of real-life violence.

Lack of appeal

Meta's internal systems dictate that when a complaint is handled “at escalation” (as all government requests to take down content are, including those made by the Met Police) there is no internal right of appeal for the user. Chinx (OS) and the other accounts affected by this decision had no right to appeal, either with Meta or with the Oversight Board. The result is that a decision that may, in some cases, result in the loss of an income stream as well as an erosion of the right to express oneself freely may go unchallenged by the user. In fact, as Chinx (OS) revealed during an interview with BBC Radio 4's World at One programme, he was never told during the process why his account had been deleted and the content removed.

The Board itself commented that: “The way this relationship works for escalation-only policies, as in this case, brings into question Meta’s ability to independently assess government actors’ conclusions that lack detailed evidence.”

Disproportionality

Each of the three shortcomings the Board revealed within Meta's procedures is worrying enough on its own; coupled with the disproportionate impact this system has upon black males (the main authors and consumers of this content), it veers dangerously close to systemic racism.

The findings of the Oversight Board’s FOI request on the Met’s activities in relation to online platforms clearly back this up.

The Digital Rights Foundation argues that while some portray drill music as a rallying call for gang violence, it in fact serves as a medium for youth, in particular black and brown youth, to express their discontent with a system that perpetuates discrimination and exclusion.

An insidious and backdoor form of policing

The cumulative effect of Meta's actions arguably amounts to an insidious and unlegislated form of policing. Without the glare of public scrutiny, with no transparency and no tribunal to test or comment on the lack of evidence, the Met have succeeded in securing punishment through the back door against content that was not, in and of itself, unlawful (removal of content can be argued to be a punishment, given that it may lead to loss of income).

As the Board pointed out in their decision, for individuals in minority or marginalised groups, the risk of cultural bias against their content is especially acute. Art, the Board noted, is a particularly important and powerful expression of “voice”, especially for people from marginalised groups creating art informed by their experiences. Drill music offers young people, and particularly young black people, a means of creative expression. As the UN Special Rapporteur in the field of cultural rights has stated, “…representations of the real must not be confused with the real… Hence, artists should be able to explore the darker side of humanity, and to represent crimes… without being accused of promoting these.”

Defending the right to express yourself freely, even when what you say may offend sections of our community, is one of the areas that truly tests our commitment to human rights.

Why we need to protect end-to-end encryption

For over fifty years, Index on Censorship has supported dissidents, journalists and activists, in part by training them on the most current technology. In recent years that has included how to use encryption and encrypted communication apps, helping them to protect themselves from repressive regimes in the easiest and most comprehensive ways possible. This support was especially necessary when encryption was still a specialist pursuit: it meant intensive training, helping people on the ground understand the options and download often complex peer-to-peer messaging apps.

Now, thankfully, encryption is everywhere; human rights defenders, journalists and MPs use platforms like Signal, Telegram and WhatsApp to exchange everything from gossip to public interest data. Encryption is critical for investigative journalists who need to communicate with sources and to protect their investigations against hostile actors, whether states or criminal gangs.

And for all of us encryption has its uses: sending family photos and sharing personal information. After all, who hasn’t sent their bank details to a friend?

Telegram is used by activists, journalists and politicians. Photo: Christian Wiediger/Unsplash

For Index on Censorship, protecting encryption is a critical frontline in the fight for freedom of expression. Free speech isn’t just about the words themselves: it is the freedom to exchange information, the freedom to gather information and the freedom to confide ideas and thoughts to others without the risk of arrest and detention. Encryption is now central to our collective ability to exercise the right to freedom of expression.

Five years ago, Jamie Bartlett wrote for Index on Censorship about how his experience of police intimidation in Croatia, a democratic EU member state, changed his view on encryption. In Jamie’s case it offered a secure means of communicating with a source who the authorities had made it clear they did not want him to speak to.

Today, in too many states, encryption is essential. On the ground in authoritarian regimes including China, Hong Kong, Belarus and Russia, the difference between expressing yourself through an encrypted messaging app and communicating unencrypted can be the difference between freedom and imprisonment, if not worse.

Promoting and defending encryption is essential for any organisation that promotes and defends free speech. That's why Index on Censorship is delighted to announce that we've received a grant from WhatsApp, the messaging app, to support our work in defending encryption. The grant of £150,000 will be used for our general work in defending digital freedom, and our work streams will not be determined by anyone other than my team at Index. From our perspective this grant is incredibly welcome: it will allow us to develop new content that explains the importance of encryption to the public, to seek new legal advice on why encryption should be protected as fundamental to our human rights, and to bring new voices into the debate on why encryption is so critical to free speech.

As with any grant, the grantee has no influence whatsoever over Index on Censorship policy positions or our work itself. Index has had its criticisms of Meta (WhatsApp’s parent company) in the past and I’m sure we will in the future, and we’ll continue to speak freely to any government or company.

Right now, we're continuing to argue for a pause to the UK government's rush to push through its flawed Online Safety Bill, so that we have the opportunity to work with Ministers to remove the 'legal but harmful' provisions (demolished by Gavin Millar QC's powerful legal opinion for Index) and to ensure the potential undermining of encryption is taken out of the legislation.

We’ve got a lot to do – but the political weather is changing in the right direction.