15 Aug 2025 | Europe and Central Asia, News and features, United Kingdom
The Online Safety Act could have been worse. When it was still a bill, it included a provision around content deemed “legal but harmful”, which would have required platforms to remove content that, while not illegal, might be considered socially or emotionally damaging. We campaigned against it, arguing that what is legal offline must remain legal online. We were successful – “legal but harmful” did not make the final cut.
Still, many troubling clauses did make their way into the Act. And three weeks ago, when age verification rules came into force, people across the UK began to see the true scope of the OSA, a vast piece of legislation that is already curtailing our online rights.
Setting aside the question of how effective some of these measures are (how easy is it, really, to age-gate when kids can just use VPNs, as we saw a few weeks back?), many of our concerns focus on privacy.
Privacy is essential to freedom of expression. If people feel they are being monitored, they change how they speak and behave. Of course, there is a balance. We use Freedom of Information requests to hold power to account, so that matters of national importance aren’t hidden behind closed doors. But that doesn’t mean all speech should be open to scrutiny. People need private space, online as well as off. It’s a basic right, and for good reason.
We’ve landed in a strange place in 2025. Never before in human history have we had such powerful tools to access people’s inner lives. But just because we can doesn’t mean we should. The OSA empowers regulators and platforms to use those tools, mostly in the name of child safety (with national security also a stated goal, albeit one that seems secondary), and that’s not good.
To be clear: I empathise with concerns around child safety. We all want an internet that is safer for children. But every conversation I’ve had, and every piece of research I’ve seen, suggests the Act won’t make much of a difference to the online experience of our children. There are too many loopholes, and the only way to close them all is to encroach further on the privacy of us all. Even then there will still be workarounds.
What does a less private internet look like? Just consider a few ways we use it: we send sensitive data – bank details, ID documents and health records, to name just three – and that data needs to stay private. We talk online about our personal lives. In a tolerant, pluralistic society this may seem unthreatening, but not everyone lives in such a society. Journalists speak to sources via apps offering end-to-end encryption of messages. Activists connect with essential networks on them too. At Index we use them all the time.
The OSA is already eroding privacy through its age-gating requirement under Section 81, which mandates that regulated providers use age-verification measures to ensure children – defined as those under 18 – don’t encounter pornographic content.
This means major platforms like TikTok, X, Reddit, YouTube and others must comply. Several sites already hold profiles of us, built from information we’ve had to upload to register and from the tracking of our online habits and patterns. Now those profiles will grow bigger still, with details like our passports and driving licences. Although the OSA says age verification information should not be stored, we already know that tech is not infallible, and this additional data could be extremely powerful in the wrong hands. We’ve seen enough major data breaches to know this isn’t a worst-case abstraction.
But it could get worse. Section 121 of the OSA gives Ofcom the power to require tech companies to use “accredited technology” to scan for child abuse or terrorism-related content, even in private messages. Under the OSA, technology is considered “accredited” if it has been approved by Ofcom, or a person designated by Ofcom, as meeting minimum standards of accuracy for detecting content related to terrorism or child abuse – minimum standards set by the Secretary of State. By allowing the government to mandate or endorse scanning technology – even for these serious crimes – the OSA risks creating a framework for routine, state-sanctioned surveillance, with clear potential for misuse. Indeed, while the government made assurances that this wouldn’t undermine end-to-end encryption, the law itself includes no such protection. Instead, complying with these notices could require platforms to break encryption, either through backdoors or invasive client-side scanning. Ofcom has even flagged encryption itself as a risk factor. The message to tech companies is clear: break encryption to show you’re doing everything possible. If a company doesn’t, and harmful content still slips through, it could be fined up to 10% of its annual global revenue. Platforms are damned if they do and damned if they don’t.
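To make that tension concrete, here is a minimal sketch in Python of why client-side scanning and end-to-end encryption pull against each other. It uses the PyNaCl encryption library, and the scanning and reporting functions are invented stand-ins rather than any real accredited system: the point is simply that a scan has to read the plaintext on the sender’s device, because once a message is sealed, only the recipient can open it.

```python
# Sketch: why client-side scanning must happen *before* encryption.
# Requires PyNaCl (pip install pynacl). The scanning and reporting
# functions below are illustrative stand-ins, not any real system.
from nacl.public import Box, PrivateKey

# Each user generates a keypair; private keys never leave their devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

def flags_content(plaintext: bytes) -> bool:
    """Stand-in for an 'accredited technology' classifier."""
    return b"example-banned-token" in plaintext

def report_to_authority(plaintext: bytes) -> None:
    """Hypothetical reporting hook, stubbed for illustration."""
    print("flagged:", plaintext)

def send_message(plaintext: bytes) -> bytes:
    # The scan has to read the message here, on the sender's device,
    # before it is sealed...
    if flags_content(plaintext):
        report_to_authority(plaintext)
    return Box(alice_key, bob_key.public_key).encrypt(plaintext)

# ...because from this point on, the platform only relays ciphertext.
ciphertext = send_message(b"hello bob")
# Only Bob's private key can open it; the server cannot.
assert Box(bob_key, alice_key.public_key).decrypt(ciphertext) == b"hello bob"
```

Backdoors and client-side scanning are simply two routes to the same end: someone other than the intended recipient gets to read the message.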
Yes, we want a safer internet for our children. I wish there were a magic bullet to eliminate harm online. This isn’t it. Instead, clauses within the OSA risk making everyone less safe online.
Sometimes we feel like a broken record here. But what choice do we have, when the attacks keep coming? And it’s not just the OSA. The Investigatory Powers Act, long dubbed the “Snooper’s Charter”, has also been used to demand backdoors into devices, as we saw with Apple earlier this year.
So, we’re grateful that WhatsApp recently renewed a grant to support our work defending encryption and our privacy rights. As always, our funders have no influence on our policy positions, and we will continue to hold Meta (WhatsApp’s parent company) to account just as we would any other entity. What we share is a core belief: privacy is a right and should be protected. And at Index we work on a core principle: human rights are hard won and easily lost. Now is not the time to give up on them – it’s the time to double down.
30 Jul 2025 | News and features, United Kingdom
The introduction of the Online Safety Act’s child protection provisions last week has reignited serious concerns about the future of free expression online in the UK. Many companies must now introduce safety measures to protect children from harmful content, typically via age-checking procedures. This covers pornography sites, but also big social media platforms, which could be required to use “highly effective” age checks to identify under-18 users in order to comply with the Act. Not only will the provisions affect under-18s’ ability to access information online, they will – by default – limit anyone who refuses to verify their age on certain sites.
The Act risks overreach, creating a chilling effect on legitimate speech. In the lead-up to the Act’s passage in 2023, we were vocal about our concerns around certain aspects of it and were pleased to see the clause around “legal but harmful” removed. We remain concerned about end-to-end encryption, which is not sufficiently protected in the Act’s wording. Our concerns go beyond encryption though, as the child protection provisions reveal.
Overall, we fear the Act opens up too many avenues for increased surveillance and monitoring, all of which foster an environment of self-censorship, stifle open dialogue and erode the right to free expression and access to information. The fact that the age limitations specifically target young people is doubly concerning when you consider that the UK plans to lower the voting age: the Act has the potential to limit young people’s access to information and their ability to participate in democratic life.
Creating a safer internet for young people is a noble cause and we don’t criticise the intentions of those behind the Act. We do, though, take issue with the provisions outlined above, and question whether they actually make the internet any safer. With that in mind, please do read more about the negative implications of the age verification system introduced by the OSA, as argued by James Ball, political editor of the New European, in his piece on Substack earlier this week. We are republishing it here with his permission.
Okay, so age verification is pretty painless. It’s still not a good thing. At all.
Two years ago, British politicians passed the Online Safety Act, a wide-ranging law which – among many other measures – introduced widespread age verification for anyone wishing to access “adult” content online.
This sort of measure is always very popular, because it’s easy to make opposing it look bad: why do you want children to be able to access porn online? Many supporters of this kind of bill are all too eager to jump to that kind of argument, and do so shamelessly – it’s presented as obvious and agreeable. Decent people want to protect children online. This measure protects children online. So…who would oppose it?
Despite that, the minority of us who do oppose these kinds of measures tend to be quite vocal, sometimes to the point of exaggeration. At one point, the UK’s age verification was going to be for specialist adult sites only – meaning that verifying your age was essentially an admission you wanted to watch porn.
That could have created blackmail potential, even within a secure system – if someone could discover which bank card had been used to verify age for a domain showing gay porn, that information alone might be useful. But as it happens, the system is being rolled out more broadly: Bluesky, for example, is requiring it for anyone to use the DM function. This means it affects far more people, but it also means the fact of being age verified can’t be used to shame anyone. That’s probably good.
Similarly, there are numerous posts going viral suggesting that the age verification law is resulting in Reddit search results being more anti-LGBT, and some are suggesting that was even the intent of the legislation (despite the law being passed by a different government than the one now in office). The basic factual claim here is false: Reddit search results haven’t been altered by the legislation.
This is just another version of the online chain letters that do the rounds now and then – like messages saying you need to copy/paste certain text to stop Facebook’s new privacy policy applying to you (never true), or the one that went around the other week about WeTransfer, which was also almost entirely false/misunderstood.
Anyway, let’s get into the realities of the new age verification regime.
The good: it’s quick, easy and pretty secure
So far, the only website that’s asked me to verify my age is Bluesky. It has outsourced this – as almost every site affected by the law will – to a third-party provider, which offers multiple quick ways to verify: for most people, either a quick automated confirmation using a live image, or a check with a bank card.
In my case, the technology took an insultingly short amount of time to confirm that the haggard 30-something in front of it was clearly an adult, and the process was completed in less than a minute. The verifier promises to delete all images and data used in the process, relaying only the successful result to the site.
This is a good system, but there is a long track record of services saying that they don’t store personally identifiable information, and then accidentally storing it anyway – which tends to only emerge later, after they’re hacked. But hopefully the companies involved in this one are aware of the heightened scrutiny on them with this legislation and have audited everything more carefully.
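For readers who want a concrete picture, here is a rough sketch in Python of the data flow just described – who sees what. Every name and number in it is invented for illustration, and real providers differ in the details, but the shape is the same: the site only ever receives an opaque pass-or-fail token, and the privacy question is what the verifier does with the image in the middle.

```python
# Rough sketch of the outsourced age-check flow described above.
# All names and logic are invented for illustration only.
import secrets

def estimate_age(face_image: bytes) -> int:
    """Stub for the provider's automated age-estimation model."""
    return 34  # the 'haggard 30-something' case

class ThirdPartyVerifier:
    """Stands in for the external age-verification provider."""
    def __init__(self) -> None:
        self._issued_tokens: set[str] = set()

    def verify(self, face_image: bytes) -> str | None:
        looks_adult = estimate_age(face_image) >= 18
        # The provider *promises* to discard the image here. Nothing in
        # the protocol itself forces deletion, hence the worry about
        # what later turns up in a breach.
        del face_image
        if not looks_adult:
            return None
        token = secrets.token_urlsafe(16)  # opaque pass/fail token
        self._issued_tokens.add(token)
        return token

    def check_token(self, token: str) -> bool:
        return token in self._issued_tokens

# The site itself only ever sees the opaque token, never the image:
verifier = ThirdPartyVerifier()
token = verifier.verify(b"<live camera frame>")
assert token is not None and verifier.check_token(token)
```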
So…what’s not to like about this process? If you’re an adult trying to verify yourself, this is about the best version of things. It’s not difficult, it’s not intrinsically intrusive, and it’s fast. This was enough to have quite a few people – including some friends of mine – post their “I told you so” takes about why age verification was fine, actually. I’m not there yet.
The bad, part one: it’s quick and easy to avoid, too
The stated aim of age verification is to protect children and teenagers from inappropriate content – this usually means sexual content, but can also be extended to include violent online imagery and video.
Broadly speaking, there are two separate groups we are trying to protect here – younger children and teens who might accidentally or unwittingly encounter inappropriate content, and older teens who are deliberately seeking it out. Age verification doesn’t work very well for either.
Evidence – including that collected by the regulator Ofcom itself – consistently shows that when younger children (typically aged 10-14) encounter adult content they don’t wish to see, they overwhelmingly see it via messaging apps, typically from their peers. Most of these apps aren’t supposed to be used by under-13s, but sites barely enforce this requirement and many parents don’t supervise it.
The current age verification rules do almost nothing to help protect this group. There are some people calling for under-16s (or even under-18s) to be barred from messaging apps – or even all social media – entirely. That’s a legitimate position, but one I personally find ridiculous: life is lived online now.
If we try to keep young people away from it, they will be woefully underskilled, undersocialised, and unprepared for the world they’ll first encounter as 16-year-olds and 18-year-olds. A phased, parentally supervised introduction to the internet is clearly the only way through here. Too much of this debate feels like efforts to outsource parenting to social media companies.
So much for the younger children who might be accidentally exposed to adult content. What about older teenagers who are trying to find it, who might be stopped by age verification? The short answer is that teenagers are very good at avoiding anything that stands between them and porn – especially when they’re often more tech savvy than their parents.
The UK’s age verification requirement can be bypassed simply by downloading a VPN, which lets you spoof where your traffic is coming from – if you use a VPN and say you’re browsing from the USA, the age verification prompts vanish immediately. At the time of writing, VPN apps are in the 1st, 2nd, 4th, 5th, 6th and 9th spots in Apple’s App Store. Go figure.
Making VPNs illegal is the stuff of dictators (and would also be terrible for corporate remote workers and other legitimate business uses), so VPNs are likely to hang around as an effortless way to avoid age verification. In the short term, the technology can also be fooled by various simple tricks.
At the moment, using photo mode in the game Death Stranding fools age verification – and since the service doesn’t save the photo, presumably if it works once there is no way to tell how many people falsely verified themselves in this way. This loophole will doubtless be closed, but new ones will be found just as quickly. Again, the government is trying to do through regulation and tech quick fixes what can only practically be achieved through parental supervision.
The bad, part two: it creates new problems
Using a paid VPN is good for your online security – it can help restrict tracking and protect you from sites trying to steal your card details. But teenagers downloading and using VPNs will inevitably be looking for free services, and these are a very different story.
At best, they’re monetising by selling browsing data, showing questionable ads, or some similar practice. But malicious software often poses as VPNs and is then used to harvest and steal credentials used while the VPN is running – which might include the bank or card details of parents using the same laptops, phones or networks.
Not every teen is going to be tech savvy or connected enough to set up a VPN, but others will try different ways to avoid age verification tech. That means a lot of them will look for small or niche adult sites which haven’t bothered trying to comply with the law – unlike the relatively ‘respectable’ mainstream adult companies. One unintended consequence of age verification, then, could be sending teens towards more extreme adult content than they would otherwise deliberately seek out.
This is going to do some serious damage, and there will be deliberate criminal enterprises working to target teenagers looking to circumvent age verification. While those people are responsible for their criminal acts, we shouldn’t forget that their emergence is a direct consequence of the legislation.
The bad, part three: it won’t stop at age verification
If you’ve read this far, you hopefully get the impression that I think the current system of age verification is mostly harmless, but also largely pointless – I don’t think it will do anything to make the internet safer.
But that in itself is part of the problem: the policy’s advocates won’t take failure as a sign that the approach is wrong. They will instead frame it as proof the policy doesn’t go far enough. Much of this is sincere campaigning, but the issue is also deliberately exploited by the UK’s intelligence agencies as part of their efforts to regain surveillance capabilities in the online era.
I recognise this makes me sound like someone who wears a tinfoil hat, so let me give one qualifier here: I don’t think intelligence agencies do this as part of a nefarious Deep State agenda. I think they are legitimately working to keep the UK safe, and their inability to access all messaging on the internet feels like an obstacle to that. I don’t assume any bad faith on their part.
GCHQ had a programme called “Mastering The Internet”, which we revealed while reporting on the Edward Snowden leaks. It was more or less what it sounded like: GCHQ wanted to be able to access everything on the internet so that it could find the bad stuff it needed to keep people safe.
In reality, this approach has consistently failed: when asked for evidence of US plots foiled specifically by mass surveillance programmes, the American agencies could only come up with a single $8,000 donation to a proscribed terror group – a terrible return on a multi-billion dollar investment. Targeted surveillance works. Mass surveillance is agencies hunting for a needle in a haystack while working to make the haystack bigger.
You may or may not agree with me on mass surveillance, but it is the case that since end-to-end encryption has become the default online, intelligence agencies are very keen to find ways to circumvent it – and to make the internet possible to monitor again.
The Home Office and intelligence agencies have consciously and deliberately put child protection at the forefront of these broader efforts, because it’s the easiest argument to win. When they push for measures that would help all of their surveillance goals, they frame it in terms of protecting children or tracking down people who view child sex abuse material online. The Home Office’s efforts to do this have occasionally bordered on the ridiculous, as I’ve reported before.
Requiring us to use our real-life verified identity whenever we browse online would be a difficult political ask in one go. That’s why the efforts are incremental – first you introduce age verification, which is quick, painless and ineffective. When it doesn’t work, you go one step further, tying an identity token to that verification and allowing it to be used in serious crime investigations. In small and measured increments, you can end online anonymity – at least so far as the government is concerned.
So what? I don’t need online anonymity anyway
Perhaps you don’t! But we do generally have anonymity offline, and most of us like it that way. In the UK we aren’t required to carry ID with us, and even in countries where people are, it’s not out on display – when we’re out in the real world, people who know us can identify us, and to everyone else we’re just a stranger.
It’s this that lets us talk and relax freely in public places: we can have a private conversation in a café or pub without worrying too much about being overheard, because even if the person at the next table is listening in, they don’t know who we are. Offline interactions are fleeting, without a permanent record.
The internet is different. There is no shortage of people who’ve faced ‘cancellation’ or other consequences for casual social media conversations from ten or fifteen years previous. What is said there is forever, and that comes with social consequences even for speech that’s perfectly legal.
I do a job in which I’m paid to have opinions in public, and part of what goes along with that is putting up with the consequences of it. Some people will disagree with your opinions, sometimes aggressively so. Some of those will decide as a consequence that they hate you as a person. Sometimes that even spills over into the real world.
I’m largely fine about that, because it’s part of the career I chose. But most of us choose not to have opinions in public – and that’s before we start thinking about whether it could affect our employment, or other aspects of our life.
That doesn’t mean we don’t have opinions that we share with friends or family. Most of us want to be able to have relaxed, off-guard conversations – and some degree of online anonymity or pseudonymity is essential for that.
Publicly connecting our online presence with our real identity is essentially condemning ourselves to a future of relentless scrutiny and self-censorship. This should not be a future any of us want.
The idea of tying our online identity to a real-world ID that only the government can see is much more palatable to people, though it honestly amazes me that it is. In 2013, as we reported on documents released by Edward Snowden, we would constantly hear American liberals shrug off what we found – saying, essentially, that they trusted that the government needed those powers, and accusing us of scaremongering when we invited them to imagine those powers in the wrong hands. Less than four years later, Donald Trump was elected. I won’t labour that point.
People aren’t scaremongering when they say that the UK criminalises speech too much in the online world, even if certain elements of the British right exaggerate the problem.
More than 1,000 people are arrested every month over something they say on social media – more than double the figure of a decade ago. Most of those arrests lead to no further action, and the overwhelming majority of the rest result in nothing more than cautions – but this isn’t a small number, and the risk isn’t zero.
People just trying to comment on politics, TV or anything else might fear a knock at the door and censor themselves. People deserve the same speech rights online as they have offline, both in the letter of the law and in how freely they feel able to express themselves in practice.
Tackling criminally abusive speech online is important, but so is allowing free speech – a fundamental human right – in a democracy. When I look at the first few days of age verification, I don’t look at it and think “problem solved”, I see the thin end of the wedge – on its own it’s not particularly harmful, and largely useless. But as the shape of things to come, it’s a step in a bad direction.
(A footnote on that Bluesky age check: it raised an eyebrow, as I have a blue tick on there, suggesting Bluesky believes it has verified my identity as a Proper Person according to whatever mysterious criteria qualify you for a tick. But they simultaneously thought I might be a child?)
20 Sep 2024 | News and features, United Kingdom
In August 2021, when the Taliban took over Kabul and home searches became ubiquitous, women started to delete anything they thought could get them in trouble. Books were burned, qualifications were shredded, laptops were smashed. But for 21 members of a women’s creative writing group, a lifeline remained: their WhatsApp group. Over the next year they would use this forum to share news with one another – a story since chronicled in the recently published book My Dear Kabul (Coronet), a project of Untold Narratives, a development programme for marginalised writers. Doing so through WhatsApp was not incidental: the app’s use of end-to-end encryption provided a strong level of protection. The only way the Taliban could read their messages was to find their phones, seize them, force them to hand over passwords and go into their accounts.
End-to-end encryption is not sexy, nor do those four words sound especially interesting, and it’s easy to switch off when a conversation about it starts. But as this anecdote shows, it’s vitally important. Another story we recently heard, also from Afghanistan: a man hid from the Taliban in a cave and used WhatsApp to call for help. Through it, safe passage to Pakistan was arranged.
It’s not just in Afghanistan where end-to-end encryption is essential. At Index we wouldn’t be able to do our work without it. We use encrypted apps to message between our UK-based staff and to keep in touch with our network of correspondents around the world, from Iran to Hong Kong. We use it to keep ourselves safe and we use it to keep others safe. Our responsibility for them is made manifest by our commitment to keep our communication and their data secure.
Beyond these safety concerns, we know end-to-end encryption is important for other reasons. It’s important because we share many personal details online, from who we are dating and who we vote for to when our passport expires, what our bank details are and even our online passwords. In the wrong hands these details are very damaging. It’s important too because privacy is essential both in its own right and as a guarantor of our other fundamental freedoms. Our online messages shouldn’t be open to all, much as our phone lines shouldn’t be tapped. Human rights defenders, journalists, activists and MPs message via platforms like Signal and WhatsApp for their work, as do many others who are simply unsettled by the prospect of having no privacy.
Fortunately, today accessible, affordable and easy-to-use encryption is everywhere. The problem is its future looks uncertain.
Last October, the Online Safety Act was passed in the UK, a sprawling piece of legislation that puts the onus on social media firms and search engines to protect children from harmful content online. It is due to come into force in the second half of 2025. Within it, Section 121 gives Ofcom powers to require technology companies to “use accredited technology” in ways that could undermine encryption. At the time of the Act’s passage, the government gave assurances this would not happen, but comments from senior political figures like Sadiq Khan, who believe amendments to the Act are needed, have done little to reassure people.
It’s not just UK politicians who are calling for a “back door”.
“Until recently, traditional phone tapping gave us information about serious crime and terrorism. Today, people use Telegram, WhatsApp, Signal, Facebook, etc. (…) These are encrypted messaging systems (…) We need to be able to negotiate what you call a ‘back door’ with these companies. We need to be able to say, ‘Mr. Whatsapp, Mr. Telegram, I suspect that Mr. X may be about to do something, give me his conversations,’” said French Interior Minister Gérald Darmanin last year.
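What would the “back door” Darmanin describes actually involve? Here is a deliberately simplified sketch in Python, using the PyNaCl encryption library, with names we have invented for illustration. The essential point is that a back door is just another key, and the mathematics cannot distinguish a police officer holding that key from a leaker, a hacker or a hostile state.

```python
# Deliberately simplified sketch: a 'back door' is just another key.
# Uses PyNaCl (pip install pynacl); all names invented for illustration.
from nacl.public import PrivateKey, SealedBox

recipient_key = PrivateKey.generate()  # the person being messaged
escrow_key = PrivateKey.generate()     # the extra key the state would hold

def send_with_backdoor(plaintext: bytes) -> tuple[bytes, bytes]:
    # A backdoored client seals every message twice: once to the
    # intended recipient, and once to the escrow key.
    to_recipient = SealedBox(recipient_key.public_key).encrypt(plaintext)
    to_escrow = SealedBox(escrow_key.public_key).encrypt(plaintext)
    return to_recipient, to_escrow

_, escrow_copy = send_with_backdoor(b"a private conversation")
# Whoever holds the escrow key - agency, leaker or hacker -
# reads the message just as well as the recipient does:
print(SealedBox(escrow_key).decrypt(escrow_copy))
```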
Over the last few years, police across Europe, led by French, Belgian and Dutch forces, have breached the encryption of users on Sky ECC and EncroChat too. Many criminals were arrested on the back of these hacking operations, which were hailed as a success by law enforcement. That may be the case. It’s just that people who were not involved in any criminal activity would also have had their messages intercepted. While on those occasions public outcry was muted, it won’t be if more commonly used tools such as WhatsApp or Signal are made vulnerable.
Back in the UK, breaking encryption would be a disaster. Not only would companies like Signal leave our shores; other nations would likely follow the UK’s lead.
For this reason we’re pleased to announce the launch of a new Index campaign highlighting why encryption is crucial. WhatsApp, the messaging app, have kindly given us a grant to support the work. As with any grant, the grantee has no influence over our policy positions or our work (and we will continue to report critically on Meta, WhatsApp’s parent company, as we would any other entity).
We’re excited to get stuck into the work. We’ll be talking to MPs, lawyers, people at Ofcom and others both inside and outside the UK. With a new raft of MPs here, and with conversations about social media very much in the spotlight everywhere, it’s a crucial moment to make the case for encryption loud and clear – both publicly and, if we so choose, in a private, encrypted forum.
6 Jun 2024 | Asia and Pacific, India, News and features
In India, the largest practical exercise in electoral politics the world has ever seen has just come to an end. Narendra Modi and his BJP have been returned to power for a third consecutive term, although without an outright majority. While there are many priorities facing the new administration, one of them will undoubtedly be modernising India’s outdated online regulatory framework.
The growth of internet access in India has been exponential. According to the Ministry of Electronics and Information Technology (MeitY), 5.5 million Indians were online in 2000; last year that number was 850 million. India’s increasing economic and geopolitical clout has been matched by a willingness to take on the tech giants to control the country’s image online. The Indian government has not tiptoed around calling for platforms such as X and YouTube to remove content or accounts. According to the Washington Post, “records published by the Indian Parliament show that annual takedown requests for posts and accounts increased from 471 to 6,775 between 2014 and 2022, with those to Twitter soaring from 224 in 2018 to 3,417 in 2022.”
India’s online regulatory regime is over 20 years old, and with the proliferation of online users and the emergence of new technologies, its age is starting to show. India is not alone in wrestling with this complex issue – just look at the Online Safety Act in the UK, the Digital Services Act (DSA) in the EU, and the ongoing discussions around Section 230 of the Communications Decency Act in the USA. Following the election, the government has confirmed its intention to update and expand the regulation of online platforms through the ambitious Digital India Act (DIA).
The DIA is intended to plug this regulatory gap, and while the need is apparent, the devil will be in the detail. MeitY has stated that while the internet has empowered citizens, it has “created challenges in the form of user harm; ambiguity in user rights; security; women & child safety; organised information wars, radicalisation and circulation of hate speech; misinformation and fake news; unfair trade practices”. The government has hosted two consultations on the Bill, and they reveal the sheer scale of its vision, covering everything from online harms and content moderation to artificial intelligence and the digitalisation of government.
Protections against liability for internet intermediaries hosting content on their platforms – often called Safe Harbour – have long defined the global discussion around online free expression, and this is a live question hanging over the DIA. During an early consultation on the Bill held in the southern city of Bengaluru, Minister of State for Information Technology Rajeev Chandrasekhar posed the question:
“If there is a need for safe harbour, who should be entitled to it? The whole logic of safe harbour is that platforms have absolutely no power or control over the content that some other consumer creates on the platform. But, in this day and age, is that really necessary? Is that safe harbour required?”
What would online speech policy look like without safe harbour provisions? It could usher in the near-total privatisation of censorship, with platforms having to proactively and expansively police content to avoid liability. This is why the European safe harbour provisions in the EU eCommerce Directive were left untouched during the negotiations around the DSA. The Indian government has framed the DIA as a way of addressing the growing power of tech giants like Google and Meta, with Chandrasekhar stating in 2024 that “[t]he asymmetry needs to be legislated, or at the very least, regulated through rules of new legislation”. Yet gifting tech companies the power to decide what can and can’t be published online would represent an alarming recalibration, one that runs at odds with the Bill’s stated aims.
The changing approach to online expression is also evident in the slides used by the minister during the 2023 Bengaluru consultation. For instance, the internet of 2000 was defined as a “Space for good – allowing citizens to interact” and a “Source of Information and News”. But for MeitY in 2023, it has curdled somewhat into a “Space for criminalities and illegalities” and a space defined by the “Proliferation of Hate Speech, Disinformation and Fake news.” This shift in perception also frames how the government identifies potential online harms. During the consultation, the minister stated that “[t]he idea of the Act is that what is currently legal but harmful is made illegal and harmful.” The harms included in his presentation ranged from catfishing and doxxing to the “weaponisation of disinformation in the name of free speech” and cyber-fraud tactics such as salami-slicing. This is a universe of harms, each of which would require a distinct and tailored response, so questions remain as to how the DIA can adequately address them all without adversely affecting internet users’ fundamental rights.
As a draft bill is yet to be published, there is no way of knowing which harms the DIA will cover, and speculation has filled the vacuum. To illustrate the point, the Internet Freedom Foundation has compiled an expansive list of what the Bill could regulate, collated solely from media coverage between July 2022 and June 2023. It includes everything from “apps that have addictive impact” and online gaming to deliberate misinformation and religious incitement material. Also unclear so far is how platforms or the state will be expected to respond to these harms. As we have seen in the UK and across Europe, without clarity, full civil society engagement and a robust rights framework, work to address online harms can significantly impact our right to free expression.
For now, the scope and scale of the government’s ambition can only be guessed at. For Index, the central question is: how can this be done while protecting the fundamental right to free expression, as outlined in Article 19 of the Indian Constitution and in international human rights law? This is an issue of significant importance for everyone in India.
This is why Index on Censorship is launching a project to support Indian civil society engagement with the DIA, to ensure the Act is informed by the experiences of internet users across the country, responds to lessons from other jurisdictions legislating on the same challenges, and adequately protects free expression. We will be engaging with key stakeholders before and during the consultation process to ensure that everyone’s right to speak out and speak up online, on whichever platform they choose, is protected.
If you are interested in learning more about this work, please contact [email protected]
Last year, we published an issue of Index magazine dedicated to free expression in India. Read it here.