How artificial intelligence is influencing elections in India

It has been less than six months since Divyendra Singh Jadoun, the 31-year-old founder of an artificial intelligence (AI) powered synthetic media company, started making content for political parties in India. In this short time he has become known as the “Indian Deepfaker”, as several political parties across the ideological spectrum reach out to him for digital campaigning.

Jadoun’s meteoric rise has a lot to do with the fact that close to a billion people are voting in India’s elections, the longest and largest in the world, which started last month. He says he doesn’t know of a single political party that hasn’t sought him out to enhance its outreach. “They [political parties] don’t reach out to us directly, though. Their PR agencies and political consultants ask us to make content for them,” said Jadoun, who runs Polymath, a nine-person AI firm based in a small town known for its temples in the north Indian state of Rajasthan.

In India’s fiercely divided election landscape, AI has emerged as a newfound fascination, particularly as the right-wing ruling Bharatiya Janata Party (BJP) vies for a rare third consecutive term. In a nation already plagued by misinformation, the technology’s capabilities have raised alarm among experts.

Jadoun says his team has been asked many times to produce content it finds highly unethical. He has been asked to fabricate audio recordings in which rival candidates appear to make embarrassing mistakes during their speeches, or to overlay opponents’ faces onto explicit images.

“A lot of the content political parties or their agents ask us to make is on these lines, so we have to say no to a lot of work,” Jadoun told Index on Censorship.

Some campaign teams have even sought deliberately low-quality fake videos from Jadoun featuring their own candidate, which they intend to deploy to discredit any genuinely damaging authentic footage that surfaces during the election period.

“We refuse all such requests. But I am not sure if every agency will have such filters, so we do see a lot of misuse of technology in these elections,” he says.

“What we offer is simply replacing the traditional methods of campaigning by using AI. For example, if a leader wants to shoot a video to reach out to each and every one of his party members, it will take a lot of time. So we use some parts of deepfakes to create personalised messages for their party members or cadres,” Jadoun adds.
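To make the mechanics concrete, here is a minimal Python sketch of the bulk-personalisation step Jadoun describes. The templating shown is real; the names, districts and script text are invented for illustration, and the final voice-cloning step is deliberately left as a comment because it depends on whichever proprietary model an agency uses.

```python
# Minimal sketch: generate one tailored script per party member.
# All names, districts and template text below are invented.
from pathlib import Path

SCRIPT_TEMPLATE = (
    "Namaste {name}, thank you for your work in {district}. "
    "Please join us at the rally on Sunday."
)

members = [
    {"name": "Asha", "district": "Ajmer"},
    {"name": "Ravi", "district": "Jaipur"},
]

out_dir = Path("scripts")
out_dir.mkdir(exist_ok=True)

for member in members:
    script = SCRIPT_TEMPLATE.format(**member)  # fill in per-member details
    (out_dir / f"{member['name']}.txt").write_text(script)
    # In a real pipeline each script would now be fed to a voice-cloning
    # model conditioned on a recording of the leader, yielding one audio
    # message per member. That step is omitted here.
```

The point is scale: a single recording session plus a loop like this can stand in for thousands of individually shot videos.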

Pervasive use

India’s elections are deeply polarised, and the ruling right-wing BJP has employed a vicious anti-minority campaign to win over the majority Hindu voters, who make up roughly 80% of the electorate. The surge in the use of AI reflects both its potential and the risks, amid widespread misinformation. A survey by cybersecurity firm McAfee, conducted last year, found that more than 75% of Indian internet users had encountered some form of deepfake content online.

Some of the most disturbing content features dead politicians resurrected through AI to sway voters. Earlier this year, the official account of the regional All India Anna Dravida Munnetra Kazhagam (AIADMK) party shared an audio clip featuring a virtual rendition of Jayalalithaa, a revered Tamil political figure who died in 2016. In the speech, her AI avatar aimed to inspire young party members, advocating the party’s return to power and endorsing its candidates for the 2024 general elections.

Jayalalithaa’s AI resurrection is not an isolated case.

In another instance, just four days prior to the start of India’s general election, a doctored video appeared on Instagram featuring the late Indian politician H Vasanthakumar. In the video, Vasanthakumar voices support for his son Vijay Vasanth, a sitting Member of Parliament who is contesting the election in his father’s erstwhile constituency.

The ruling BJP, known for its use of technology to polarise voters, has also shared a montage showcasing Prime Minister Modi’s accomplishments on its verified Instagram profile. The montage featured an AI-generated rendition of the voice of the late Indian singer Mahendra Kapoor.

Troll accounts subscribing to the ideologies of different political parties are also employing AI and deepfakes to create narratives and counter-narratives. In a tweet last month, Bollywood star Ranveer Singh cautioned his followers to be vigilant against deepfakes after a manipulated video circulated on social media in which he appeared to criticise Modi. Using an AI-generated voice clone, the altered video falsely portrayed Singh lambasting Modi over unemployment and inflation and advocating support for the main opposition party, the Indian National Congress (INC). In the original video, he had praised Modi.

“AI has permeated mainstream politics in India,” said Sanyukta Dharmadhikari, deputy editor of Logically Facts, who leads a seven-member team fact-checking misinformation in different vernacular languages.

Dharmadhikari says that countering disinformation or misinformation becomes extremely difficult in an election scenario as false information consistently spreads more rapidly than fact-checks, particularly when it aligns with a voter’s confirmation bias. “If you believe a certain politician is capable of a certain action, a deepfake portraying them in such a scenario can significantly hinder fact-checking efforts to dispel that misinformation,” she told Index on Censorship.

Selective curbs

Amidst growing concerns, the Indian government rushed to regulate AI just a month before the elections, asking tech companies to obtain approval before releasing new tools. This is a substantial shift from its earlier position, when it told the Indian Parliament it would not interfere in how AI is used in the country. Critics argue that the move might be another attempt to selectively clamp down on the opposition and limit freedom of expression. The Modi government has been widely accused of abusing central agencies to target the opposition while overlooking allegations involving its own leaders or those of its coalition partners.

“There needs to be a political will to effectively regulate AI, which seems amiss,” says Dharmadhikari. “The Information Ministry at first seemed concerned at the misuse of deepfakes, but gradually we have seen it express no concern about their dissemination, especially if something is helping [PM] Modi,” she added.

Chaitanya Rohilla, a Delhi-based lawyer who filed a Public Interest Litigation (PIL) at the Delhi High Court over the unregulated use of AI and deepfakes in the country, believes that as the technology develops at breakneck speed, the need for robust legal frameworks to safeguard against AI’s emerging threats is more pressing than ever.

“The government is saying that we are working on it…We are working on rules to bring about or to specifically target these deepfakes. But the problem is the pace at which the government is working, it is actually not in consonance with how the technology is changing,” Rohilla told Index on Censorship.

Rohilla’s PIL asked the judiciary to restrict access to websites that produce deepfakes, proposing that such sites be required to label AI-generated content and be prohibited from generating illicit material.
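For illustration only, here is a minimal Python sketch of what machine-readable labelling of AI-generated images might look like for PNG files, using Pillow’s text metadata. The key names are invented, and such labels are trivially strippable; real provenance standards such as C2PA rely on cryptographic signing and are far more robust.

```python
# Sketch only: embed and read an "AI-generated" label in PNG metadata.
# The key names are illustrative assumptions, not part of any standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src: str, dst: str, tool: str) -> None:
    image = Image.open(src)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")
    metadata.add_text("generator", tool)
    image.save(dst, pnginfo=metadata)  # text chunks survive PNG saves

def read_label(path: str) -> dict:
    return Image.open(path).text  # PNG text chunks as a dict

label_as_ai_generated("output.png", "output_labelled.png", tool="some-model")
print(read_label("output_labelled.png"))
```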

But Indian courts have refused to intervene.

“The Information Technology Act that we have in our country is not suitable; it’s not competent to handle how dynamically the AI environment is changing. So as the system is unchecked and unregulated, it [deepfake dissemination] would just keep on happening and happening.”

The journalists being threatened with artificial intelligence

The first time Gharidah Farooqi became a target of tech-facilitated gender-based violence (TFGBV) was in 2014. She was working as a reporter at Samaa, a private Pakistani television channel, covering cricketer-turned-politician Imran Khan’s anti-government protest, which set off from the eastern city of Lahore in Punjab province to the country’s capital, Islamabad.

“I was there 24/7 on the ground and would go to the hotel just to take a few hours of rest,” she told Index.

“My morphed photos from the field coverage were posted on social media along with sexist and vulgar comments,” recalled Farooqi, who is currently working as a senior anchor at GTV, another private Pakistani TV channel. 

“For the longest of time, I ignored it, but not in [my] wildest imagination had I foreseen a Frankenstein in the making,” said Farooqi, adding that people were not used to seeing a woman reporter in the field. 

“For them it was just shughal [making fun] of me,” she said.

A decade later, the attacks have not stopped. In fact, they have taken on an even uglier and more dangerous shape through generative artificial intelligence (AI), which uses models to create new content.

“Generative AI is making TFGBV even more difficult to address,” explained Nighat Dad, a lawyer and internet activist who runs the not-for-profit organisation Digital Rights Foundation (DRF), which helps Pakistanis fight against online harassment.

She said the technology gives a creator the power to alter an original image, text, audio or video very quickly, in a way that makes it hard to tell whether it is an original or a deepfake, terming it “photoshopping in a more sophisticated manner”.
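There is no reliable general-purpose deepfake detector, but when a known original exists, even simple tooling can measure how far a circulating copy has drifted from it. Below is a minimal Python sketch using perceptual hashing via the Pillow and imagehash libraries; the file names and the threshold are illustrative assumptions.

```python
# Requires: pip install pillow imagehash
from PIL import Image
import imagehash

def hash_distance(original_path: str, suspect_path: str) -> int:
    """Hamming distance between perceptual hashes; 0 = visually identical."""
    original = imagehash.phash(Image.open(original_path))
    suspect = imagehash.phash(Image.open(suspect_path))
    return original - suspect  # imagehash defines `-` as Hamming distance

# File names are placeholders for a known original and a circulating copy.
distance = hash_distance("original.jpg", "circulating_copy.jpg")
if distance > 10:  # illustrative threshold, not a calibrated value
    print(f"Copy has drifted substantially from the original ({distance})")
```

A wholly generated deepfake, with no authentic original to compare against, defeats this approach entirely, which is part of what makes it so hard to address.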

For Farooqi, “the period between the calm and chaos” is so short, she barely gets any respite. Along with the organised campaign by political parties’ supporters, there is a daily barrage of abuse on her social media pages, she said, adding: “It is not mere trolling; trolling is a very harmless word compared to what I’m facing.”

She’s not the only one to have had a taste of this form of violence. 

“The prime targets are of course women, although a few men have also been targeted,” said Farooqi. Many have reached out to her, “mostly for emotional support” and to ask her how to seek legal help. 

In her experience, female colleagues have always supported each other, and supported her in particular, for which she says she’s “eternally grateful”.

“Gharidah faces [more] attacks than any other journalist,” said Dad, who is constantly being contacted for help by women journalists. 

The DRF has a helpline and a resource kit that offers a list of places offering help. Between January and November 2023, 22 female and 14 male journalists reached out to DRF with complaints including blasphemy accusations, abusive messages, bullying, blackmailing, censorship, defamation, GBV, impersonation, online stalking, phishing, sexual harassment and threats of physical violence.

While Farooqi has learnt to navigate the legal mechanisms and lodge complaints, not everyone will be as astute in warding off cyber harassment.

The Prevention of Electronic Crimes Act 2016 “carves out certain offences such as morphing of pictures or videos etc., which is done using tech tools”, according to Farieha Aziz, a cybercrime expert and co-founder of Bolo Bhi, an advocacy forum for digital rights. 

But the “manner in which online harassment cases are executed and dealt with, despite complaints being lodged and arrests being made, remains problematic due to a lack of priority by the Federal Investigation Agency [FIA]. Either these women do not hear back or there is no progress on complaints they’ve made, at various stages of the case,” Aziz said.

Farooqi said she has filed complaints eight times, but it was only on her most recent one, made last month after her personal details (her home address and her telephone number) were made public, that the FIA took any action.

“I started getting anonymous calls and messages threatening me with rape and even death warnings. It was the first time that the agency took swift action and got the posts deleted,” she said.

The Karachi-based Centre of Excellence in Journalism has produced a safety kit for women journalists “on how to protect themselves and where and how to report,” Aziz told Index, adding that they also provide counselling.

“There has been pervasive and persistent online harassment, sexualised and otherwise gendered disinformation faced by women journalists in Pakistan, with many being threatened with physical assault and offline violence. We’ve witnessed multiple incidents of female journalists’ private information being leaked online with what we can say are well-planned and directed efforts to silence them and [which] resulted in stalking and offline harassment,” said a statement by the Pakistan-based Network of Journalists for Digital Rights earlier this month, condemning the use of TFGBV and generative AI to attack female journalists.

Farooqi considers generative AI yet another weapon to silence and subdue women journalists. Describing herself as a woman with “nerves of steel”, she said she has to be thick-skinned to survive these attacks. To stay sane, she advises people never to engage with attackers.

Why a manipulated image of a politician is the tip of the AI-ceberg

In an era dominated by the instantaneous dissemination of information, the role of all forms of media, traditional and social, in shaping public perception is unparalleled. However, as we navigate this digital age, an unsettling trend has emerged: the manipulation of images by media outlets. You would think this was best left to tyrants and propagandists (yes, I am thinking of Stalin).

This trend poses an ominous threat to freedom of expression, not just because it is clearly wrong but because even small instances can undermine our collective faith in the veracity of mainstream media. When trusted sources, such as print media, doctor visual content, they undermine the very integrity of our news. In an age where authenticity is paramount, the subtle yet powerful influence of manipulated images not only distorts reality but also raises questions about the integrity of journalism.

Social media, with its rapid circulation of information and the immediacy of user-generated content, can become a breeding ground for false information and manipulated images. The allure of likes, shares, and engagement metrics has driven a disturbing trend where sensationalism seems to take precedence over accuracy.

As traditional media outlets grapple with the evolving landscape, there’s a cautionary tale to be heeded. While the pressure to maintain relevance in a digitally driven world is real, traditional media must resist the temptation to follow the path paved by social media in the pursuit of engagement. Upholding journalistic standards and commitment to truth is more crucial than ever. In the face of a misinformation and disinformation epidemic, it is imperative for established media to serve as a beacon of reliability, separating fact from fiction and reaffirming their role as guardians of an informed and discerning public.

The recent controversy surrounding Australian news channel Nine, which came under intense scrutiny for doctoring an image of Georgie Purcell, a Victoria state MP, by enlarging her breasts and exposing her midriff, is a glaring example of the dangers of traditional media chasing clickbait rather than focusing on substance. The Nine network apologised for the incident, claiming the manipulation happened inadvertently through Photoshop’s AI automation and involved no human intervention, a claim Purcell did not believe.

Adobe, the maker of the Photoshop software, has also added to the controversy by saying such a change would require the involvement of a human operator.

If so, this appalling act not only objectifies and demeans her but also highlights a disconcerting trend where journalistic integrity is sacrificed for sensationalism. Such actions not only perpetuate harmful stereotypes but also set a dangerous precedent, undermining the principles of a free, fair and honest press. As a society, we must reflect on the implications of such practices, recognising that the distortion of images in the media not only infringes upon the dignity of individuals but also poses a severe threat to the very foundations of truth and freedom of expression that responsible journalism seeks to uphold.

The controversy should prompt a broader conversation about the increasing reliance on technology within news outlets and the impact it may have on public trust. The incident serves as a stark reminder that while automation can streamline processes, its unchecked application in the realm of journalism poses serious ethical concerns. As media organisations integrate advanced technologies, there must be a concerted effort to ensure that the human touch, critical thinking and ethical considerations remain at the forefront. The episode with Nine News underscores the importance of maintaining a balance between technological advancements and the principles that underpin responsible journalism, lest we risk compromising the very essence of truth, integrity and freedom of expression that society relies upon for a well-informed and democratic discourse.

Should we worry that ChatGPT can write my blog?

There are two letters which dominate national conversations at the moment. AI. What is it, what can it do now, what will it be able to do and what does that mean for society and how will it affect people’s real lives? Both for good and ill. Everyone has a different view and our national governments have yet to negotiate an agreed, global ethical and industrial standard.

Obviously for Index our concerns focus solely on the potential impact of AI on speech and freedom of expression. Algorithms can amplify speech or de-list speech. No one yet knows what the impact of AI will be on speech but we fear how tyrants and despots may seek to manipulate national narratives and to manage propaganda channels both at home and abroad.

Given the wide-ranging conversations, I thought the best starting point would be to ask the most accessible AI platform what it thought was going to happen next. This was the response from ChatGPT:

As artificial intelligence (AI) continues to advance at an unprecedented pace, its potential implications for freedom of expression cannot be ignored. While AI offers various benefits, there are growing concerns that it may inadvertently amplify censorship and restrict the fundamental right to express oneself. In this blog, we explore how AI poses a threat to freedom of expression and highlight the potential increase in censorship.

Automated content moderation

AI-driven content moderation systems are increasingly employed by social media platforms to filter and remove objectionable or illegal content. However, AI algorithms can be prone to biases, leading to inadvertent censorship. Without careful design and continuous monitoring, these systems may disproportionately target certain groups or viewpoints, limiting diverse opinions and stifling open dialogue. The over-reliance on AI for content moderation also raises concerns about lack of transparency and accountability in decision-making processes.
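One concrete way to surface the disproportionate targeting described above is to audit a moderation system’s false-positive rate per group: how often each group’s benign posts are wrongly flagged. The Python sketch below runs on invented toy records; a real audit would use labelled moderation logs.

```python
from collections import defaultdict

# (group, model_flagged, actually_violating) - invented toy records
records = [
    ("group_a", True, True),
    ("group_a", True, False),
    ("group_a", False, False),
    ("group_a", False, False),
    ("group_b", True, True),
    ("group_b", True, False),
    ("group_b", True, False),
    ("group_b", False, False),
]

def false_positive_rates(records):
    """Share of each group's benign posts the model wrongly flagged."""
    wrongly_flagged = defaultdict(int)
    benign_total = defaultdict(int)
    for group, flagged, violating in records:
        if not violating:  # only benign posts can be false positives
            benign_total[group] += 1
            if flagged:
                wrongly_flagged[group] += 1
    return {g: wrongly_flagged[g] / benign_total[g] for g in benign_total}

print(false_positive_rates(records))
# {'group_a': 0.33..., 'group_b': 0.66...}: a wide gap between groups
# suggests one group's benign speech is being disproportionately removed.
```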

Algorithmic filtering

AI algorithms play a pivotal role in curating and personalising online content. While this can enhance user experiences, it also poses a risk to freedom of expression. Recommendation algorithms have the potential to create “filter bubbles”, where individuals are only exposed to information that aligns with their existing beliefs. This can reinforce echo chambers, limit exposure to diverse perspectives and impede critical thinking. By shaping the information users consume, AI-powered algorithms inadvertently control the flow of information and hinder the free exchange of ideas.
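The mechanism is easy to demonstrate in miniature. In the Python sketch below (all items and scores are invented), ranking purely by similarity to a user’s past engagement always surfaces the like-minded item first, while adding a novelty term lets the opposing perspective through.

```python
def rank(items, user_vector, diversity_weight=0.0):
    def score(item):
        # Dot product with the user's history rewards "more of the same".
        similarity = sum(a * b for a, b in zip(item["vector"], user_vector))
        return similarity + diversity_weight * item["novelty"]
    return sorted(items, key=score, reverse=True)

items = [
    {"title": "aligned op-ed", "vector": [1.0, 0.0], "novelty": 0.0},
    {"title": "opposing analysis", "vector": [0.2, 0.9], "novelty": 1.0},
]
user_vector = [1.0, 0.0]  # toy encoding of past engagement

print([i["title"] for i in rank(items, user_vector)])
# ['aligned op-ed', 'opposing analysis'] - the filter bubble
print([i["title"] for i in rank(items, user_vector, diversity_weight=1.0)])
# ['opposing analysis', 'aligned op-ed'] - a diversity term breaks it
```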

Deepfakes and disinformation

Advancements in AI have facilitated the creation of highly realistic deepfake videos and manipulated images, raising concerns about the spread of disinformation. Deepfakes can be exploited to manipulate public opinion, erode trust in authentic media, and create chaos. In response, governments and platforms might resort to increased censorship to combat this threat. While some censorship may be necessary to prevent harm, striking the right balance is crucial to avoid infringing upon individuals’ rights to express themselves freely.

In conclusion, while AI offers immense potential for innovation and progress, it also poses threats to freedom of expression and could potentially increase censorship. It is essential to address these challenges by fostering transparency, accountability and inclusivity in AI systems. Striking the right balance between AI’s capabilities and preserving fundamental rights is crucial for safeguarding freedom of expression in an AI-driven world.

After using ChatGPT for the first time to generate this blog, I now have two very different concerns. One, will I ever need to actually sit down and write a blog again? And two, even according to AI itself, we may have a lot of work ahead to protect freedom of expression in an AI-controlled environment.

As ever there is work to be done and Index will be at the frontline to protect our collective fundamental right of freedom of expression.

[The image of our CEO Ruth Anderson writing a blog post with the help of artificial intelligence at the top of this article was also generated by AI. Credit: Bing.]