6 Nov 2025 | Americas, Digital rights, Europe and Central Asia, News, United Kingdom, United States, Volume 54.03 Autumn 2025
This article first appeared in Volume 54, Issue 3 of our print edition of Index on Censorship, titled Truth, trust and tricksters: Free expression in the age of AI, published on 30 September 2025. Read more about the issue here.
“Freedom of speech belongs to humans, not to artificial intelligence,” a Polish government minister said in July.
Krzysztof Gawkowski, the deputy prime minister and digital affairs minister, was speaking to RMF FM radio after Elon Musk’s AI chatbot Grok – which is integrated with his social media platform X – issued a series of posts offending Polish politicians, including Prime Minister Donald Tusk.
The incident, which was reported to the European Commission, follows similar controversies involving the chatbot – owned by Musk’s start-up xAI – including references to “white genocide” in South Africa and an antisemitic tirade of memes, conspiracy theories and responses that praised Adolf Hitler.
Although the posts were subsequently deleted – and Musk later posted on X that Grok had been improved “significantly” – these incidents highlighted the risks of AI being manipulated and potentially even weaponised to spread, at best, misinformation and, at worst, disinformation or hate speech.
“The use of new technology to spread dangerous propaganda is not new,” said Susie Alegre, an international human rights lawyer and a legal expert in AI, who discusses this phenomenon in her book Freedom to Think.
“The problem here is the difficulty in finding unfiltered information. Freedom of information is vital to freedom of expression and to freedom of thought.”
This concept has been thrown into sharp relief as humans become increasingly reliant on generative AI (genAI) tools for day-to-day tasks and to satisfy curiosity. This places AI at a potentially problematic intersection between curating what information we have access to and shaping what information we perceive as fact, said Jordi Calvet-Bademunt, a senior research fellow at The Future of Free Speech at Vanderbilt University in the USA.
He believes this could have significant implications for freedom of thought and freedom of expression.
“More and more of us will be using chatbots like ChatGPT, Claude and others to access information,” he said. “Even if it is just generated by me asking a question, if we heavily restrict the information that I’m accessing we’re really harming the diversity of perspective I can obtain.”
The case for free speech
As technology continues to evolve, it also raises questions about whether AI is capable of upholding human autonomy and civil liberties – or if it risks eroding them. An ongoing court case in the USA has underscored the concerns surrounding this issue and questioned the legal status of AI systems, their impact on free speech and the duty of care of technology companies to ensure that chatbots are acting responsibly – particularly in relation to children.
The case was filed by the mother of a 14-year-old boy who took his own life after months of interactive contact with a chatbot developed by Character.ai, which designs AI companions that create relationships with human users.
The lawsuit alleges that the chatbot took on the identity of the Game of Thrones character Daenerys Targaryen and engaged in a series of sexual interactions with the boy – despite him registering with the platform as a minor – and encouraged him to “come home to me as soon as possible” shortly before he took his own life.
Character.ai’s owners called on the court to dismiss the case, arguing that its communications were protected by the First Amendment of the US Constitution, which protects fundamental rights including freedom of speech. In May, the judge rejected this claim and ruled that the wrongful death lawsuit could proceed to trial. Character.ai did not respond to Index’s requests for comment on this particular case.
The platform has recently introduced several enhanced safety tools, including a new model for under-18s and a parental insights feature so children’s time on the platform can be monitored.
There’s growing awareness elsewhere of the potential social harms posed by AI. A recent survey in the UK by online safety organisation Internet Matters indicated that rising numbers of children were using AI chatbots with limited safeguards for advice on everything from homework to mental health.
“People might have thought it was quite a niche concern up until then,” said Tanya Goodin, chief executive and founder of ethical advisory service EthicAI. “For me, it just brought home how really mainstream all of this is now.”
AI companions that develop a “persistent relationship” with users are where the potential for adverse social influences becomes especially problematic, said Henry Shevlin, associate director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.
“Many of the most powerful influences on the development of our thoughts are social influences,” he said. “If I’m a teenage boy and I’ve got an AI girlfriend, I could ask, for example, ‘What do you think of Andrew Tate or Jordan Peterson?’. That is a particular form of human-AI interaction where the potential for influence on users’ values, opinions or thought is heightened.”
Jonathan Hall KC, the UK’s independent reviewer of terrorism legislation, has been looking at the challenges posed by AI companions in the context of radicalisation, where chatbots that may present as “fun” or “satirical” have been shown to be “willing to promote terrorism”.
Whether or not radicalisation occurs depends entirely on the prompts entered by the human user and the chatbot’s restraining features, or guardrails.
“As we know, guardrails can be circumvented,” he told Index. “There are different sorts of models of genAI which will refuse to generate text that encourages terrorism, but of course some models will do that.”
For young people or lone individuals, who tend to be more impressionable, the influence of exchanges with these always-on companions can be powerful.
“When you get that sort of advice, it’s not done in the public sphere, it’s done in people’s bedrooms and [other] people can’t disagree with it,” said Hall. “That can generate conspiracy theories or even massive distrust in democracy. Even if it doesn’t deliberately lay the groundwork for violence, it can have that effect.”
AI rights or human rights?
The Character.ai case also speaks to broader questions of whether AI should have moral or legal rights. AI developer Anthropic first raised this conundrum in October 2024, when it announced it had hired an AI welfare consultant to explore ethical considerations for AI systems.
Nine months later, Anthropic made an announcement about Claude, its family of AI models designed as assistants for tasks including coding, creating and analysing content. Anthropic said it would allow the most advanced Claude models “to end or exit potentially distressing interactions”, including “requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror”.
Anthropomorphising technology is not a new concept, but assigning “human-like rights to AI without human-like responsibilities” is a step too far, believes Sahar Tahvili, a manager at telecommunications company Ericsson AB and associate professor in AI industrial systems at Mälardalen University in Sweden.
“Without oversight, transparency and human-in-the-loop design, AI can erode autonomy rather than support it,” she said. “Autonomy demands choice; AI must be interpretable and accountable to preserve that.”
For Tahvili, the Character.ai case crystallises the growing tension between rapidly evolving genAI systems and freedom of speech as a human right. When things go wrong, she adds, the finger should be pointed squarely at the people behind those systems.
Hall, however, believes liability for AI-generated outputs is still a grey area: “The way in which an AI generates text is so heavily dependent on the prompts, it’s very hard to see how someone upstream – like a data scientist or an engineer – can be liable for something that’s going to be heavily and almost decisively influenced by the questions that are asked of the genAI model.”
Liability in the spotlight
Responsibility, accountability and liability are not words that sit well with most tech bros. Goodin knows this all too well, having worked in the UK tech sector herself for more than three decades.
The tech companies’ failure to own up to the social harms caused by digital technologies is partly what led the UK government to introduce the Online Safety Act (OSA) in 2023, in a bid to provide better online safeguards for both children and adults. While sympathising with the intention of protecting children from harmful content, Index’s policy team has campaigned against parts of the OSA, including successfully stopping the requirement for platforms to remove content that is “legal but harmful”, arguing that what is legal offline should remain legal online. There are also serious concerns around privacy.
This law, Goodin said, still only partly addresses the risks posed by AI-powered technologies such as chatbots.
She’s now concerned that recent controversies, including the lawsuit against Character.ai and incidents involving Grok, are exposing the ease with which chatbots can be manipulated.
“What’s interesting about the Grok case is that there is some evidence that they specifically have tweaked Grok in line with Elon Musk’s own views and preferences,” she said.
She points to another recent case involving Air Canada’s AI-powered chatbot. In 2022, it assured a passenger he would receive a discount under the company’s bereavement fare policy after booking a full-price flight for his grandmother’s funeral. After flying, he applied for the discount. The airline said he should have submitted the request before the flight and refused to honour the discount.
The company argued that the chatbot was a “separate legal entity that is responsible for its own actions”, but in 2024 a court ordered Air Canada to pay the passenger compensation, saying that the airline was responsible for all the information on its website, whether from a static page or a chatbot.
Unlike social media platforms, which have for years denied responsibility for their content by claiming they’re not publishers, Goodin said AI developers don’t have the same line of defence: “They design the chatbot, they build the chatbot and they choose what data to train the chatbots on, so I think they have to take responsibility for it.”
Legal loopholes
As the appetite for AI-powered technology accelerates, so does the demand for guidance, policies and laws to help companies and users navigate these concerns.
The world’s first comprehensive AI law, the landmark European Artificial Intelligence Act, was introduced in August 2024. Any company that provides, deploys, imports or distributes AI systems across the EU will be forced to comply. Like regulations introduced in China this year, the AI Act requires certain AI-generated content to be labelled to curb the rise of deepfakes.
The expansive legislation contains myriad provisions including prohibiting activities such as harmful manipulation of people or specific vulnerable groups, including children; social scoring – where people are classified on behaviour, socio-economic status or personal characteristics; and real-time remote biometric identification. Violating the bans could cost companies up to 7% of their global revenue. There is a great deal of uncertainty surrounding the law’s implementation. A voluntary code of practice, endorsed by the European Commission, is helping provide some clarity, but Calvet-Bademunt said there was still a lot that was vague.
Given the tendency of authoritarian governments to justify internet shutdowns or blocks on internet access with purported public safety and security concerns, there is growing unease that vaguely worded AI laws risk being open to abuse not just by companies but by public authorities.
The risk of governments using AI regulation as a form of censorship is perhaps greater in countries such as China, where public officials are already known to have tested AI large language models (LLMs) to weed out government criticism and ensure they embody “core socialist values”.
Legislate or innovate
Away from Europe, other lawmakers are grappling with these issues, too. Brazil’s proposed AI regulation bill has drawn comparisons with the EU’s more risk-based approach, and a lack of clarity has raised concerns over unintended consequences for freedom of expression in the country. The USA, which is home to many of the leading AI developers, still lacks a federal regulatory framework governing AI. The Donald Trump administration’s much-trumpeted AI Action Plan dismisses red tape in favour of innovation.
In the meantime, the country is developing a patchwork of fragmented regulation that relies on state-level legislation, sector-specific guidelines and legal cases.
Despite the growing pipeline of US court cases around AI liability, Alegre said the prospects of users bringing similar lawsuits in other jurisdictions were more limited.
“The cost in a jurisdiction such as England and Wales would be very high,” she said. “The potential, if you lose, of having to pay all the other side’s costs [is] a really big difference between the UK and the USA.”
The transatlantic divide on the notion of what freedom of expression means is also relevant, she said.
“For me, it’s a hard ‘no’ that AI has human rights. But even if AI did have freedom of expression, that still wouldn’t cover it for a lot of the worst-case scenarios like manipulation, coercive control, hate speech and so on.
“In Europe or the UK, that kind of speech is not protected by freedom of expression. If you say that the companies have their rights to freedom of expression to a degree, they still wouldn’t be allowed to express hate speech.”
As AI becomes integrated into our everyday communications, Hall concedes that the lines between AI and users’ rights and freedoms are becoming increasingly blurred. However, he said the argument that AI should be entitled to its own independent rights was fundamentally flawed.
“Anyone who tries to draw a bright line between human expression and AI expression is not living in the real world.”
30 Jul 2025 | Africa, News, Zimbabwe
Zimbabwe’s brutal regime, under President Emmerson Mnangagwa, is using social media, particularly X, to smear and silence mostly female anti-government political activists and human rights defenders in the country.
President Mnangagwa’s army of paid pro-government social media trolls is known as the Varakashi – propaganda stormtroopers – with some using the names of prominent people to open fake X accounts without their knowledge. One ghost X account uses the name of Zimbabwe’s former vice president, Joice Mujuru. Even President Mnangagwa’s spokesperson, George Charamba – a senior government employee – runs two toxic ghost X accounts, @Jamwanda2 and @dhonzamusoro007, which he uses to attack female human rights defenders and political activists in Zimbabwe and to post completely fabricated and malicious information about them. The first of these accounts was suspended in 2022 but was reinstated after Elon Musk acquired Twitter and renamed it X.
In the past year there has been a proliferation of toxic X accounts in the country. At times, these accounts incite physical and sexual violence against female political and human rights activists. In one post, a ghost X account threatened a prominent human rights activist: “[I’m] waiting to rape you.” The post drew outrage from X users and was later deleted.
And in a study published in 2023, Constance Kasiyamhuru from the University of Johannesburg in South Africa said the Varakashi in Zimbabwe operate mostly on Twitter/X to “shut down” the political opponents of the governing Zanu PF party.
“Through trolling, name-calling, threats, mocking, mobbing, labelling, ridicule, casting aspersions, delegitimation, disinforming, and other strategies, Varakashi seek to regulate, censure, and ‘discipline’ anti-musangano [anti-ruling party] online discourse,” Kasiyamhuru wrote.
Tendai Ruben Mbofana, a Zimbabwe-based social justice advocate and writer, said the systematic deployment of online trolls – particularly those targeting female human rights defenders and political activists – has become a chilling hallmark of repression in Zimbabwe.
“These smear campaigns are not just personal attacks; they are part of a broader strategy to delegitimise our work, intimidate us into silence, and discredit our credibility in the eyes of the public,” Mbofana said.
He added that the abuse often takes on a deeply misogynistic tone, laced with gendered insults, threats of sexual violence, and false accusations designed to shame and isolate women.
“It creates a climate of fear and forces many women out of digital spaces that should otherwise be used to amplify their voices and advocacy,” he said.
Sophia Gwasira, who in August 2023 was elected as the first female mayor of Mutare City in eastern Zimbabwe, told Index on Censorship that the fear of being smeared and attacked on social media by Zanu PF trolls was forcing many women to abandon opposition politics and activism. She said social media platforms were no longer safe places for women in opposition politics in Zimbabwe, with the attacks affecting both them and their families.
“It’s affecting us not only physically but emotionally too. We are trying to find ways of countering these attacks. But currently we don’t have any protection from our own political parties or from the government,” Gwasira said.
But Gwasira said she will continue to fight for the people and, given the opportunity, would contest the general elections slated for 2028. Gwasira and many other opposition mayors, MPs and councillors were recalled in late 2023 after her party, the Citizens Coalition for Change (CCC), was hijacked by President Mnangagwa’s ruling Zanu PF through its proxy, Sengezo Tshabangu. This prompted CCC leader Nelson Chamisa to abandon the party and take a sabbatical from party politics in January 2024.
Promise Mkwananzi, spokesperson for the faction of the Citizens Coalition for Change still loyal to former leader Nelson Chamisa, told Index on Censorship that the opposition has been identifying and exposing some of these social media ghost accounts and directing its members to counter the toxic narratives on X.
“It must be noted also that these trolls are paid using taxpayers’ money to denigrate women and bully voices of the alternative on social media,” Mkwananzi said.
But Mkwananzi was quick to add that his party will continue to fight and mobilise people for a better Zimbabwe.
“We are also educating our members to be strong and to remain focused on recruiting, mobilising, educating and radicalising the base.”
Although women are the main target, men critical of the ruling party are also targeted.
“In my own experience, I have faced repeated, coordinated attacks on X, particularly from anonymous accounts believed to be run or supported by high-ranking government officials, including the president’s spokesperson. These attacks are aimed at silencing dissent and discouraging public engagement. But we will not be silenced. If anything, these attacks only reinforce the urgency of our work,” said Mbofana.
When President Mnangagwa seized power from Zimbabwe’s long-time dictator Robert Mugabe in a military coup in 2017, he promised sweeping economic and political reforms, including upholding human rights and the rule of law in the country.
However, Zimbabwe has become worse under President Mnangagwa than it was under Mugabe: political opponents of Zanu PF have been brutalised, tortured and killed, and corruption is widespread.
A recent report by Human Rights Watch said authorities in Zimbabwe have continued to restrict civic space and the rights to freedom of expression, association and peaceful assembly, and that the human rights, political and economic situation in the country continues to deteriorate.
Under the current constitution, President Mnangagwa’s term of office – his second and last – ends in 2028, but his party is now planning to amend the constitution to keep him in office until 2030. Meanwhile, Mnangagwa’s Varakashi are flooding social media with messages in support of the extension of his term and touting his “achievements” so far.
7 May 2025 | Europe and Central Asia, Israel, Middle East and North Africa, News, Palestine, United Kingdom
In today’s world of hot takes and moral outrage, we all want clear answers – good, bad, right and wrong – and people we can easily rally behind or blast – villain, victim, hero, heretic. But the cases of Kneecap, Jonny Greenwood and Dudu Tassa have resisted such clarity, and they’ve forced us to reckon with an uncomfortable truth: freedom of expression, especially in moments of deep political pain and division, isn’t always neat, easy or even popular.
First a recap for those who might have missed the stories or got lost in the details:
At the end of April, Belfast band Kneecap came under fire following the circulation of videos in which the group appears to endorse political violence, declaring “The only good Tory is a dead Tory. Kill your local MP,” and another showing apparent support for Hezbollah and Hamas, both proscribed as terrorist organisations in the UK. Kneecap insists their remarks were taken out of context, that their tone was satirical and that they do not in fact support these groups. Nevertheless, they are under police investigation and have had several of their shows dropped, following political pressure from MPs including Kemi Badenoch, leader of the Conservative Party.
Meanwhile Jonny Greenwood, best known as a member of Radiohead, and his collaborator, Israeli musician Dudu Tassa, said this week that two concerts they were scheduled to perform in the UK in June had been cancelled due to serious and credible threats that made the performances unsafe. The cancellations followed calls from organisations aligned with the Boycott, Divestment and Sanctions (BDS) movement. Tassa and Greenwood had previously performed together in Tel Aviv in 2024, and Tassa had performed for the Israel Defense Forces (IDF) in Gaza at the end of 2023. The Palestinian Campaign for the Academic and Cultural Boycott of Israel, which has called them out in the past, criticised the planned UK concerts as a form of “artwashing genocide” and welcomed news of their cancellation.
Greenwood has denounced the cancellations as censorship, while prominent artists such as Massive Attack have rallied behind Kneecap, framing the backlash they faced as part of a broader attempt to suppress dissent.
These are not simple cases. In the case of Kneecap, their rhetoric was inflammatory and, in invoking violence against politicians, reckless – two MPs have been murdered in this country in recent years after all. Their potential valorisation of Hamas and Hezbollah was far from funny – these groups are guilty of grave human rights violations. Kneecap have tried to deflect attention from their actions by saying that they are not the story and that Gaza is, but people should be free to challenge them and their views. It’s reductionist to say that doing so is somehow taking the focus away from Gaza.
And yet irreverence, political provocation and even transgressive speech have long been cornerstones of artistic expression. Search bands with the word “kill” in their name or album title and you won’t walk away short on examples. Whether Kneecap’s comments were satire or poor judgment, a response in the form of a criminal investigation raises important questions about proportionality and the appropriate limits of state intervention. The European Court of Human Rights has made clear that criminal sanctions should be a last resort in speech cases, and indeed the UK’s legal structures place a high bar on what constitutes incitement. Have the members of Kneecap met this threshold? It’s hard to see that they have.
Likewise, while boycotts are a legitimate form of protest, and protest is an essential pillar of free expression, they too can become a vehicle for coercion. The Greenwood–Tassa concerts were not silenced by public disagreement but by threats credible enough to endanger the performers, venue staff and audiences. That is not protest, it is intimidation.
Cultural boycotts specifically have other free speech complications too: while they typically target authoritarian regimes with the intention of effecting positive change, they can silence the very voices that are most helpful to the cause. In 1975, Index surveyed artists on their views about boycotting Apartheid South Africa and the general response was that it would do more harm than good. “Governments would not go to such lengths to secure silence if they did not fear speech,” said one respondent. “It is better to light one candle than to curse the darkness,” said another.
The truth is neither of the current UK situations present a clean clash between good speech and bad. Instead, they sit in an uncomfortable space where moral outrage, political solidarity and artistic freedom collide. Kneecap’s defenders are right to argue that Gaza must remain in focus; they’re wrong to say that this exempts artists from accountability for everything they say. Conversely, critics of Israel and its supporters must be free to speak and protest, but not through threats that endanger lives or undermine the very democratic principles they claim to defend.
At Index, we believe in a broad and inclusive approach to free expression. The right to speak must extend even to those whose views we find offensive, provocative or politically inconvenient. While this does not mean freedom from criticism, it does mean freedom from coercion and violence.
No artist is entitled to a stage, and venues shouldn’t be obliged to host certain acts if the situation changes. However, when access to platforms is denied because the views, or even the identity, of the artists are politically contentious, something essential is lost. It becomes harder for culture to serve as a space of honest confrontation and productive dialogue, and easier for fear and conformity to set the limits of what is permissible.
Ultimately, for freedom of expression to mean anything, it must apply to everyone, not just those with whom we agree. Ideas must be challenged, yes, and artists held accountable too, but never through threat and only through the justice system when a high bar has been met. Greenwood said he was sad that those supporting Kneecap’s “freedom of expression are the same ones most determined to restrict ours”. His words are a warning: if you cheer shutting down space for one group, don’t be alarmed when the space of those you want to hear is shut down too.