Rwanda polls: The Kagame “landslide” that would embarrass other dictators

Rwanda’s Paul Kagame has won 99.15% of the vote in this week’s presidential poll, a margin of victory so large that even Belarusian dictator Aliaksandr Lukashenka has baulked at claiming such a figure in his own rigged elections.

Rather than being a vote of confidence, the main reason for Kagame’s overwhelming “victory” was the lack of opposition.

Last month, Rwanda’s opposition leader Diane Rwigara said that after all the time, work and effort she had put in, she was very disappointed to have been barred from contesting the country’s 15 July presidential election.

Rwigara also failed to run in the 2017 presidential poll, as she was charged with inciting insurrection, an accusation levelled years earlier against her late father, Assinapol Rwigara, who died in a suspicious car accident that his family says was an assassination.

“@PaulKagame why won’t you let me run? This is the second time you cheat me out of my right to campaign,” Rwigara posted on the social media platform X, formerly known as Twitter.

In Rwanda’s elections on Monday, Kagame’s only opposition was two little-known candidates. Rwigara and five others, including two of his major challengers, Victoire Ingabire and Bernard Ntaganda, had been barred from the ballot.

Rwanda’s National Electoral Commission said the incumbent got 99.15% of the vote.

Kagame, whose government was central to a scheme to process asylum seekers arriving in the UK “illegally” – an initiative since scrapped by the new Labour government – has been in power since the end of the country’s genocide in 1994.

Countless critics have been jailed or killed since then. One of them, Hotel Rwanda hero Paul Rusesabagina, known for sheltering hundreds of people during the genocide, was jailed in 2021 for supporting terrorism after being arrested while travelling internationally. His sentence was later commuted and he was allowed to return to the USA.

In an interview with Index, Jeffrey Smith, the executive director of Vanguard Africa, a non-profit group that works with activists to support free and fair elections, said the election outcome does not reflect the will of the people.

“In Rwanda, there is freedom of expression. But that freedom is limited to expressing support for Paul Kagame and his ruling party — whether coerced or otherwise,” he said.

“These latest so-called ‘election’ results — a sort of performative art perfected by Kigali — clearly establish the Kagame dictatorship as among the most effective and effectively brutal police states of the 21st century.”

It’s a view shared by Ingabire, one of the opposition leaders who was barred from both the 2017 and 2024 elections. Ingabire published an opinion article in May in which she said Rwanda’s election will entrench the persistent suppression of opposition voices.

In January 2010, after 16 years in exile, Ingabire returned to Rwanda intending to register her political party and run for president, but was arrested and sentenced to 15 years in prison. Kagame later pardoned her under international pressure, but she has been prevented from leaving the country. An international campaign, #CallKagameforVictoire, urges heads of government around the world to ask Kagame to end Ingabire’s persecution.

“My trial, marred by irregularities and a lack of minimum fair trial standards, ended with a harsh sentence for crimes including ‘genocide ideology’, a controversial offence that has been used to silence dissent,” Ingabire wrote in her opinion article.

Ingabire said the African Court on Human and Peoples’ Rights later ruled in 2017 that her rights to freedom of opinion and expression had been violated, a verdict she says highlights the broader issues of legal restrictions on speech and the challenges faced by the political opposition in Rwanda.

Ingabire said many international observers see Rwanda as an exemplary country because it adeptly orchestrates communication campaigns and disseminates compelling narratives globally, showcasing its purported capability to address both domestic and international challenges, including counterterrorism.

The country has also deployed Rwandan soldiers in multinational peacekeeping missions.

“This carefully crafted public image is not reflective of reality,” she said.

In How to Rig an Election, co-written by Professor Nic Cheeseman and Brian Klaas, President Aliaksandr Lukashenka of Belarus is quoted as saying he ordered his 93% victory in the 2006 election to be revised down to around 80%, because more than 90% would not be psychologically well received.

Cheeseman, who is professor of democracy and international development at the University of Birmingham, told Index in an interview that Kagame’s margin of victory speaks volumes.

“If Lukashenka, the last dictator of Europe, thinks that winning more than 90% in an election is psychologically implausible, it is pretty clear that 99% tells us as much about President Kagame’s desire for absolute control as it does the wishes of the Rwandan people,” said Professor Cheeseman.

According to Freedom House’s Freedom in the World 2024 report, Kagame’s government has suppressed political dissent through pervasive surveillance, intimidation, arbitrary detention, torture, and renditions or suspected assassinations of exiled dissidents.

“The practical space for free private discussion is limited in part by indications that the government monitors personal communications. Social media are heavily monitored, and the law allows for government hacking of telecommunications networks. Rwandan authorities reportedly use informants to infiltrate civil society, further discouraging citizens from voicing dissent. Individuals have been forcibly disappeared, arrested, detained, and assassinated for expressing their views,” said the report.

Apart from autocratic rule at home, Rwanda faces accusations abroad: last week the head of the United Nations Organisation Stabilisation Mission in the Democratic Republic of Congo (MONUSCO) accused Rwanda of supporting the 23 March Movement (M23) rebels, who are committing atrocities in the neighbouring country.

In January, Burundi closed its border with Rwanda after accusing its neighbour of funding rebel attacks.

Report pinpoints role of likely Russian troll networks in European election disinformation

A network of accounts flooded social media with disinformation in the run-up to the European Parliament elections, a new report has found.

The report was commissioned by the Social Democrats (S&D) grouping in the European Parliament, together with the Dutch GroenLinks-PvdA delegation, and produced by disinformation specialists Trollrensics.

It reveals that organised networks of thousands of accounts, which the researchers believe are likely of Russian origin, actively influenced public opinion on X in France and Germany during the elections, while voters in the Netherlands, Italy and the English-speaking public were also affected by the troll networks.

Trollrensics’ data analysis showed, for example, that at least 20% of all tweets about the French far-right politician Éric Zemmour came from this troll network. However, the research company estimates the actual percentage is significantly higher, as the networks manipulated the X algorithm to amplify specific themes.

The research also found that German political party AfD received a huge boost thanks to the troll army. At least 10.7% of the tweets about the AfD came from the disinformation network.

The network focused mainly on spreading pro-Russian propaganda, anti-vaccination narratives and anti-LGBTIQ+ messages.

Thijs Reuten, an MEP for the S&D, said, “We commissioned this independent study as we were curious about the extent of online foreign interference and how measurable it is – especially because this sometimes seems so hard to ascertain. This study has shown that significant influence took place during the European elections. Troll armies managed to make topics trend and at the same time make certain news reports less visible.”

Reuten added, “This clearly shows our democracy is vulnerable and that foreign powers are willing to spend a lot of money and effort to sow division in our population. We need to defend ourselves better against such organised attempts of foreign interference. I expect the European Commission and the intelligence services to be on top of this. Our open society is in danger if troll armies are able to manipulate social media and, therefore, the public debate”.

The report confirms concerns from European groups that large-scale troll networks from Russia were attempting to influence the outcome of the elections.

Has Russian disinformation caused Europe’s lurch to the right?

While the outcome of the 2024 election is yet to be finalised, results at the time of writing show that Eurosceptic conservatives are on course to win an extra 14 seats (taking them to 83), while right-wing nationalists will gain nine seats (to 58). Overall, the right, including centre-right politicians of the European People’s Party grouping, has done well, largely at the expense of the liberal and green party groupings. With just five nations out of 27, including Italy and Estonia, remaining to publish their final results, the overall picture is unlikely to change dramatically.

The move to the far right is evident across Europe. France, which elects 81 members to the European Parliament (EP), was perhaps where this was most pronounced. Marine Le Pen’s far-right National Rally party is projected to receive around 31-32% of the vote, against around 15% for President Macron’s centrist party. Macron was so concerned about his party’s poor showing that he called a snap election in the country. Belgium’s prime minister also handed in his resignation after the nationalist New Flemish Alliance emerged as the big winner of the regional, national and European Parliament elections held in the country on Super Sunday.

In Germany, Eurosceptic parties are projected to secure over 16% of the EP vote. The AfD tripled its support among voters under 24, from 5% in 2019 to 16%, and gained six seats to reach 15. The Greens lost nine of the 21 seats they held last time around. Austria’s far-right Freedom Party won nearly 26% of the vote, gaining three seats, while in the Netherlands, Geert Wilders’s anti-immigration Party for Freedom gained six seats with 17% of the vote. A similar story played out in Poland, Spain, Greece, Bulgaria and Croatia.

But what is driving Europe’s veer to the right?

There is some evidence that the success of the far right comes from millennial and Gen Z voters shifting towards these parties. A third of French voters under 34 and 22% of young German voters favour their country’s far right, while in the Netherlands, the Party for Freedom has become the largest party among under-34s.

Young Europeans, mainly those aged 18-29, overwhelmingly rely on social media for daily news consumption. In Italy and Denmark, nearly three-quarters of young adults use social media for news daily (74% and 75% respectively). A recent German youth study found that 57% of young people prefer social media for news and political updates.

There is growing concern that external actors, particularly from Russia, may have influenced the elections.

Media reports reveal that EU leaders were so concerned about foreign interference in the elections that they set up rapid alert teams to manage any serious incidents. Officials told the Guardian that disinformation has reached “tsunami levels.”

The evidence points to Russia.

Last December, France’s VIGINUM group, which is tasked with protecting France and its interests against foreign digital interference, published a report revealing a network of nearly 200 websites with addresses of the form pravda-xx.com or xx.news-pravda.com, where xx is the country identifier.

The sites, which generate little new content themselves, instead amplify existing pro-Russian content from state sources and social media, including posts from military blogger Mikhail Zvinchuk. Pro-Russian content relating to the Ukraine war is a particular favourite.

A joint investigation by 34 fact-checking organisations in Europe showed that the Pravda network had spread to at least 19 EU countries. The fact-checking organisation Greece Fact Check, in cooperation with Pagella Politica and Facta news, has since observed the Pravda network attempting to convey large amounts of disinformation and pro-Russian propaganda to sway EU public opinion.

The organisation said that “minor pro-Russian politicians who run for the elections are quoted by state media such as Ria and then further amplified by the Pravda network, in what seems an attempt to magnify their relevance”.

A report by the European Digital Media Observatory (EDMO) on EU-related disinformation ahead of the elections found that it reached its highest ever level in May 2024. Ministers for European affairs from France, Germany and Poland cautioned about efforts to manipulate information and mislead voters. Across the EU, authorities observed a resurgence in coordinated operations spreading anti-EU and anti-Ukraine narratives through fake news websites and on the social media platforms Facebook and X.

Among the false stories that emerged were reports that European Commission President Ursula von der Leyen had links to Nazism and had been arrested in the European Parliament.

In Germany, stories circulated that the country’s vote was being manipulated, that ballot papers with holes or cut corners were invalid, and that anyone voting for the far-right AfD would be subject to stricter rules. Other stories attempted to trick voters into voting multiple times or signing their ballot papers, practices that would invalidate their votes.

The report also noted that around 4% of such disinformation articles have been created using AI tools.

The tsunami of disinformation looks unlikely to fade away any time soon. The Guardian says that the EU’s rapid alert teams have been asked to continue their work for weeks after the election.

A senior official told the paper, “The expectation is that it is around election day that we will see this interruption of narratives questioning the legitimacy of the European elections, and in the weeks around it.”

How artificial intelligence is influencing elections in India

It has been less than six months since Divyendra Singh Jadoun, the 31-year-old founder of an artificial intelligence (AI) powered synthetic media company, started making content for political parties in India. In this short time he has come to be known as the “Indian Deepfaker”, as political parties across the ideological spectrum reach out to him for digital campaigning.

Jadoun’s meteoric rise has a lot to do with the fact that close to a billion people are voting in India’s elections, the longest and largest in the world, which started last month. He says he doesn’t know of a single political party that hasn’t sought him out to enhance its outreach. “They [political parties] don’t reach out to us directly, though. Their PR agencies and political consultants ask us to make content for them,” said Jadoun, whose nine-employee AI firm Polymath is based in a small town known for its temples in the north Indian state of Rajasthan.

In India’s fiercely divided election landscape, AI has emerged as a newfound fascination, particularly as the right-wing ruling Bharatiya Janata Party (BJP) vies for a rare third consecutive term. The technology’s capabilities, in a nation already plagued by misinformation, have raised apprehension among experts.

Jadoun says his team has been asked many times to produce content which they find highly unethical. He has been asked to fabricate audio recordings that show rival candidates making embarrassing mistakes during their speeches or to overlay opponents’ faces onto explicit images.

“A lot of the content political parties or their agents ask us to make is on these lines, so we have to say no to a lot of work,” Jadoun told Index on Censorship.

Some campaign teams have even sought deliberately low-quality counterfeit videos from Jadoun featuring their own candidate, which they intend to deploy to discredit any potentially damaging authentic footage that surfaces during the election period.

“We refuse all such requests. But I am not sure if every agency will have such filters, so we do see a lot of misuse of technology in these elections,” he says.

“What we offer is simply replacing the traditional methods of campaigning by using AI. For example, if a leader wants to shoot a video to reach out to each and every one of his party members, it will take a lot of time. So we use some parts of deepfakes to create personalised messages for their party members or cadres,” Jadoun adds.

Pervasive use

India’s elections are deeply polarised and the ruling right-wing BJP has employed a vicious anti-minority campaign to win over majority Hindu voters, who form roughly 80% of the electorate. The surge in the use of AI reflects both its potential and the concerns around it, amid widespread misinformation. A survey taken last year by cybersecurity firm McAfee found that over 75% of Indian internet users have encountered some type of deepfake content while online.

Some of the most disturbing content features dead politicians resurrected through AI to sway voters. Earlier this year, the official account of the regional All India Anna Dravida Munnetra Kazhagam (AIADMK) party shared an audio clip featuring a virtual rendition of Jayalalithaa, a revered Tamil political figure who died in 2016. In the speech, her AI avatar aimed to inspire young party members, advocating for the party’s return to power and endorsing its candidates for the 2024 general elections.

Jayalalithaa’s AI resurrection is not an isolated case.

In another instance, just four days prior to the start of India’s general election, a doctored video appeared on Instagram featuring the late Indian politician H Vasanthakumar. In the video, Vasanthakumar voices support for his son Vijay Vasanth, a sitting Member of Parliament who is contesting the election in his father’s erstwhile constituency.

The ruling BJP, known for its use of technology to polarise voters, has also shared a montage showcasing Prime Minister Modi’s accomplishments on its verified Instagram profile. The montage featured the synthesised voice of the late Indian singer Mahendra Kapoor, generated using AI.

Troll accounts subscribing to the ideologies of different political parties are also employing AI and deepfakes to create narratives and counter-narratives. In a tweet last month, Bollywood star Ranveer Singh cautioned his followers to be vigilant against deepfakes after a manipulated video circulated on social media platforms in which he appeared to criticise Modi. Using an AI-generated voice clone, the altered video falsely portrayed Singh lambasting Modi over unemployment and inflation and advocating that citizens support the main opposition party, the Indian National Congress (INC). In the original video, he had in fact praised Modi.

“AI has permeated mainstream politics in India,” said Sanyukta Dharmadhikari, deputy editor of Logically Facts, who leads a team of seven fact-checkers working in different vernacular languages.

Dharmadhikari says that countering disinformation or misinformation becomes extremely difficult in an election scenario as false information consistently spreads more rapidly than fact-checks, particularly when it aligns with a voter’s confirmation bias. “If you believe a certain politician is capable of a certain action, a deepfake portraying them in such a scenario can significantly hinder fact-checking efforts to dispel that misinformation,” she told Index on Censorship.

Selective curbs

Amid growing concerns, the Indian government rushed to regulate AI, asking tech companies to obtain approval before releasing new tools just a month before the elections. This is a substantial shift from its earlier position, when it told the Indian Parliament it would not interfere in how AI is used in the country. Critics argue that the move might be another attempt to selectively bear down on the opposition and limit freedom of expression. The Modi government has been widely accused of abusing central agencies to target the opposition while overlooking allegations involving its own leaders or those of its coalition partners.

“There needs to be a political will to effectively regulate AI, which seems amiss,” says Dharmadhikari. “Even though the Information Ministry at first seemed concerned at the misuse of deepfakes, but gradually we have seen they have expressed no concerns about their dissemination especially if something is helping [PM] Modi,” she added.

Chaitanya Rohilla, a lawyer based in Delhi who initiated a Public Interest Litigation (PIL) at the Delhi High Court concerning the unregulated use of AI and deepfakes in the country, believes that as technology unfolds at breakneck speed, the need for robust legal frameworks to safeguard against AI’s emerging threats is more pressing than ever.

“The government is saying that we are working on it…We are working on rules to bring about or to specifically target these deepfakes. But the problem is the pace at which the government is working, it is actually not in consonance with how the technology is changing,” Rohilla told Index on Censorship.

Rohilla’s PIL had requested the judiciary to restrict access to websites that produce deepfakes. The proposal suggested that such websites should be mandated to label AI-generated content and be prohibited from generating illicit material.

But Indian courts have refused to intervene.

“The Information Technology Act that we have in our country is not suitable; it’s not competent to handle how dynamically the AI environment is changing. So as the system is unchecked and unregulated, it [deepfake dissemination] would just keep on happening and happening.”