Looking forward: Challenges facing online speech regulation in India

In India, the largest practical exercise in electoral politics the world has ever seen has just come to an end. Narendra Modi and his Bharatiya Janata Party (BJP) have been returned to power for a third consecutive term, although without an outright majority. While there are many priorities facing the new administration, one of them will undoubtedly be modernising India’s outdated online regulatory framework.

The growth of internet access in India has been exponential. According to the Ministry of Electronics and Information Technology (MeitY), 5.5 million Indians were online in 2000; last year that number was 850 million. India’s increasing economic and geopolitical clout has been matched by a willingness to take on the tech giants in order to control the country’s image online. The Indian government has not tiptoed around calling for platforms such as X and YouTube to remove content or accounts. According to the Washington Post, “records published by the Indian Parliament show that annual takedown requests for posts and accounts increased from 471 to 6,775 between 2014 and 2022, with those to Twitter soaring from 224 in 2018 to 3,417 in 2022.”

India’s online regulatory regime is over 20 years old, and with the proliferation of online users and the emergence of new technologies, its age is starting to show. India is not alone in wrestling with this complex issue – just look at the Online Safety Act in the UK, the Digital Services Act (DSA) in the EU and the ongoing discussions around Section 230 of the Communications Decency Act in the USA. Following the election, the government has confirmed its intention to update and expand the regulation of online platforms through the ambitious Digital India Act (DIA).

The DIA is intended to plug the regulatory gap, and while the need is apparent, the devil will be in the detail. MeitY has stated that while the internet has empowered citizens, it has “created challenges in the form of user harm; ambiguity in user rights; security; women & child safety; organised information wars, radicalisation and circulation of hate speech; misinformation and fake news; unfair trade practices”. The government has hosted two consultations on the Bill, and they reveal the sheer scale of the Indian government’s vision, covering everything from online harms and content moderation to artificial intelligence and the digitalisation of government.

Protections against liability for internet intermediaries hosting content on their platforms – often called safe harbour – have long defined the global discussion around online free expression, and this is a live question hanging over the DIA. During an early consultation on the Bill held in the southern city of Bengaluru, the Minister of State for Information Technology, Rajeev Chandrasekhar, posed the question:

“If there is a need for safe harbour, who should be entitled to it? The whole logic of safe harbour is that platforms have absolutely no power or control over the content that some other consumer creates on the platform. But, in this day and age, is that really necessary? Is that safe harbour required?”

What would online speech policy look like without safe harbour provisions? It could usher in the near-total privatisation of censorship, with platforms having to proactively and expansively police content to avoid liability. This is why the European safe harbour provisions in the EU eCommerce Directive were left untouched during the negotiations around the DSA. The Indian government has highlighted the importance of the DIA in addressing the growing power of tech giants like Google and Meta, with Chandrasekhar stating in 2024 that “[t]he asymmetry needs to be legislated, or at the very least, regulated through rules of new legislation”. Against that backdrop, gifting tech companies the power to decide what can and cannot be published online would represent an alarming recalibration, one that runs counter to the Bill’s stated aims.

The changing approach to online expression is also evident in the slides used by the minister during the 2023 Bengaluru consultation. For instance, the internet of 2000 was defined as a “Space for good – allowing citizens to interact” and a “Source of Information and News”. But for MeitY, by 2023 it had curdled somewhat into a “Space for criminalities and illegalities” and a space defined by the “Proliferation of Hate Speech, Disinformation and Fake news.” This shift in perception also frames how the government identifies potential online harms. During the consultation, the minister stated that “[t]he idea of the Act is that what is currently legal but harmful is made illegal and harmful.” The minister’s presentation highlighted a number of harms, from catfishing and doxxing to the “weaponisation of disinformation in the name of free speech” and cyber-fraud tactics such as salami-slicing. This is a universe of harms, each of which would require a distinct and tailored response, and so questions remain as to how the DIA can adequately address them all without adversely affecting internet users’ fundamental rights.

As a draft bill is yet to be published, there is no way of knowing which harms the DIA will cover. In its absence, speculation has filled the vacuum. To illustrate the point, the Internet Freedom Foundation has compiled an expansive list of what the Bill could regulate, collated solely from media coverage of the Bill between July 2022 and June 2023. It includes everything from “apps that have addictive impact” and online gaming to deliberate misinformation and religious incitement material. How platforms or the state are expected to respond to these harms also remains shrouded in darkness. As we have seen in the UK and across Europe, without clarity, full civil society engagement and a robust rights framework, work to address online harms can significantly impact our right to free expression.

For now, the scope and scale of the government’s ambition can only be guessed at. For Index, the central question is: how can this be done while protecting the fundamental right to free expression, as outlined in Article 19 of the Indian Constitution and in international human rights law? This is an issue of significant importance for everyone in India.

This is why Index on Censorship is kicking off a project to support Indian civil society engagement with the DIA, to ensure that it is informed by the experiences of internet users across the country, responds to lessons from other jurisdictions legislating on the same challenges and adequately protects free expression. We will be engaging with key stakeholders before and during the consultation process to ensure that everyone’s right to speak out and speak up online, on whichever platform they choose, is protected.

If you are interested in learning more about this work, please contact [email protected]

Last year, we published an issue of Index dedicated to free expression in India. Read it here.