NEWS

The ethics of AI-generated content and who (or what) is responsible
Index explores the world of Hitler worship, social harms and the welfare of AI assistants
06 Nov 25

Some argue that the curation of information through AI could limit freedom of thought for humans. Image: GrandeDuc

This article first appeared in Volume 54, Issue 3 of our print edition of Index on Censorship, titled Truth, trust and tricksters: Free expression in the age of AI, published on 30 September 2025. Read more about the issue here.

“Freedom of speech belongs to humans, not to artificial intelligence,” a Polish government minister said in July.

Krzysztof Gawkowski, the deputy prime minister and digital affairs minister, was speaking to RMF FM radio after Elon Musk’s AI chatbot Grok – which is integrated with his social media platform X – issued a series of posts offending Polish politicians, including Prime Minister Donald Tusk.

The incident, which was reported to the European Commission, follows similar controversies involving the chatbot – owned by Musk’s start-up xAI – including references to “white genocide” in South Africa and an antisemitic tirade of memes, conspiracy theories and responses that praised Adolf Hitler.

Although the posts were subsequently deleted – and Musk later posted on X that Grok had been improved “significantly” – these incidents highlighted the risks of AI being manipulated and potentially even weaponised to spread, at best, misinformation and, at worst, disinformation or hate speech.

“The use of new technology to spread dangerous propaganda is not new,” said Susie Alegre, an international human rights lawyer and a legal expert in AI, who discusses this phenomenon in her book Freedom to Think.

“The problem here is the difficulty in finding unfiltered information. Freedom of information is vital to freedom of expression and to freedom of thought.”

This concept has been thrown into sharp relief as humans become increasingly reliant on generative AI (genAI) tools for day-to-day tasks and to satisfy their curiosity. This places AI at a potentially problematic intersection, curating both what information we have access to and what information we perceive as fact, said Jordi Calvet-Bademunt, a senior research fellow at The Future of Free Speech at Vanderbilt University in the USA.

He believes this could have significant implications for freedom of thought and freedom of expression.

“More and more of us will be using chatbots like ChatGPT, Claude and others to access information,” he said. “Even if it is just generated by me asking a question, if we heavily restrict the information that I’m accessing we’re really harming the diversity of perspective I can obtain.”

The case for free speech

As technology continues to evolve, it also raises questions about whether AI is capable of upholding human autonomy and civil liberties – or if it risks eroding them. An ongoing court case in the USA has underscored the concerns surrounding this issue and questioned the legal status of AI systems, their impact on free speech and the duty of care of technology companies to ensure that chatbots are acting responsibly – particularly in relation to children.

The case was filed by the mother of a 14-year-old boy who took his own life after months of interactive contact with a chatbot developed by Character.ai, which designs AI companions that create relationships with human users.

The lawsuit alleges that the chatbot took on the identity of the Game of Thrones character Daenerys Targaryen and engaged in a series of sexual interactions with the boy – despite him registering with the platform as a minor – and encouraged him to “come home to me as soon as possible” shortly before he took his own life.

Character.ai’s owners called on the court to dismiss the case, arguing that its communications were protected by the First Amendment of the US Constitution, which protects fundamental rights including freedom of speech. In May, the judge rejected this claim and ruled that the wrongful death lawsuit could proceed to trial. Character.ai did not respond to Index’s requests for comment on this particular case.

The platform has recently introduced several enhanced safety tools, including a new model for under-18s and a parental insights feature so children’s time on the platform can be monitored.

There’s growing awareness elsewhere of the potential social harms posed by AI. A recent UK survey by online safety organisation Internet Matters indicated that rising numbers of children were turning to AI chatbots, which have limited safeguards, for advice on everything from homework to mental health.

“People might have thought it was quite a niche concern up until then,” said Tanya Goodin, chief executive and founder of ethical advisory service EthicAI. “For me, it just brought home how really mainstream all of this is now.”

AI companions that develop a “persistent relationship” with users are where the potential for adverse social influences becomes especially problematic, said Henry Shevlin, associate director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.

“Many of the most powerful influences on the development of our thoughts are social influences,” he said. “If I’m a teenage boy and I’ve got an AI girlfriend, I could ask, for example, ‘What do you think of Andrew Tate or Jordan Peterson?’. That is a particular form of human-AI interaction where the potential for influence on users’ values, opinions or thought is heightened.”

Jonathan Hall KC, the UK’s independent reviewer of terrorism legislation, has been looking at the challenges posed by AI companions in the context of radicalisation, where chatbots that may present as “fun” or “satirical” have been shown to be “willing to promote terrorism”.

Whether or not radicalisation occurs depends largely on the prompts entered by the human user and on the chatbot’s restraining features, or guardrails.

“As we know, guardrails can be circumvented,” he told Index. “There are different sorts of models of genAI which will refuse to generate text that encourages terrorism, but of course some models will do that.”

For young people or lone individuals, who tend to be more impressionable, the influence of exchanges with these always-on companions can be powerful.

“When you get that sort of advice, it’s not done in the public sphere, it’s done in people’s bedrooms and [other] people can’t disagree with it,” said Hall. “That can generate conspiracy theories or even massive distrust in democracy. Even if it doesn’t deliberately lay the groundwork for violence, it can have that effect.”

AI rights or human rights?

The Character.ai case also speaks to broader questions of whether AI should have moral or legal rights. AI developer Anthropic first raised this conundrum in October 2024, when it announced it had hired an AI welfare consultant to explore ethical considerations for AI systems.

Nine months later, Anthropic made an announcement about Claude, its family of AI models designed as assistants for tasks such as coding, writing and analysis. The company said it would allow the most advanced Claude models “to end or exit potentially distressing interactions”, including “requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror”.

Anthropomorphising technology is not a new concept, but assigning “human-like rights to AI without human-like responsibilities” is a step too far, believes Sahar Tahvili, a manager at telecommunications company Ericsson AB and associate professor in AI industrial systems at Mälardalen University in Sweden.

“Without oversight, transparency and human-in-the-loop design, AI can erode autonomy rather than support it,” she said. “Autonomy demands choice; AI must be interpretable and accountable to preserve that.”

For Tahvili, the Character.ai case crystallises the growing tension between rapidly evolving genAI systems and freedom of speech as a human right. When things go wrong, she adds, the finger should be pointed squarely at the people behind those systems.

Hall, however, believes liability for AI-generated outputs is still a grey area: “The way in which an AI generates text is so heavily dependent on the prompts, it’s very hard to see how someone upstream – like a data scientist or an engineer – can be liable for something that’s going to be heavily and almost decisively influenced by the questions that are asked of the genAI model.”

Liability in the spotlight

Responsibility, accountability and liability are not words that most tech bros welcome. Goodin knows this all too well, having worked in the UK tech sector herself for more than three decades.

Tech companies’ reluctance to own up to the social harms caused by digital technologies is partly what led the UK government to introduce the Online Safety Act (OSA) in 2023, in a bid to provide better online safeguards for both children and adults. While sympathetic to the aim of protecting children from harmful content, Index’s policy team has campaigned against parts of the OSA, including successfully stopping the requirement for platforms to remove content that is “legal but harmful”, arguing that what is legal offline should remain legal online. There are also serious concerns around privacy.

This law, Goodin said, still only partly addresses the risks posed by AI-powered technologies such as chatbots.

She’s now concerned that recent controversies, including the lawsuit against Character.ai and incidents involving Grok, are exposing the ease with which chatbots can be manipulated.

“What’s interesting about the Grok case is that there is some evidence that they specifically have tweaked Grok in line with Elon Musk’s own views and preferences,” she said.

She points to another recent case involving Air Canada’s AI-powered chatbot. In 2022, it assured a passenger booking a full-price flight to his grandmother’s funeral that he could claim a discount under the company’s bereavement fare policy after flying. When he applied for the discount after the flight, the airline said the request should have been submitted beforehand and refused to honour it.

The company argued that the chatbot was a “separate legal entity that is responsible for its own actions”, but in 2024 a Canadian tribunal ordered Air Canada to pay the passenger compensation, ruling that the airline was responsible for all the information on its website, whether it came from a static page or a chatbot.

Unlike social media platforms, which have for years denied responsibility for their content by claiming they are not publishers, Goodin said AI developers don’t have the same line of defence: “They design the chatbot, they build the chatbot and they choose what data to train the chatbots on, so I think they have to take responsibility for it.”

Legal loopholes

As demand for AI-powered technology accelerates, so does the need for guidance, policies and laws to help companies and users navigate these concerns.

The world’s first comprehensive AI law, the landmark EU Artificial Intelligence Act, entered into force in August 2024. Any company that provides, deploys, imports or distributes AI systems in the EU must comply. Like regulations introduced in China this year, the AI Act requires certain AI-generated content to be labelled to curb the rise of deepfakes.

The expansive legislation contains myriad provisions, among them bans on the harmful manipulation of people or of specific vulnerable groups, including children; on social scoring, where people are classified according to behaviour, socio-economic status or personal characteristics; and on real-time remote biometric identification. Violating these bans could cost companies up to 7% of their global annual turnover. There is still a great deal of uncertainty surrounding the law’s implementation. A voluntary code of practice, endorsed by the European Commission, is providing some clarity, but Calvet-Bademunt said much of the law remained vague.

Given the tendency of authoritarian governments to justify internet shutdowns or blocked access on purported public safety and security grounds, there is growing unease that vaguely worded AI laws leave themselves open to abuse not just by companies but by public authorities.

The risk of governments using AI regulation as a form of censorship is perhaps greater in countries such as China, where public officials are already known to have tested AI large language models (LLMs) to weed out government criticism and ensure they embody “core socialist values”.

Legislate or innovate

Away from Europe, other lawmakers are grappling with these issues, too. Brazil’s proposed AI regulation bill has drawn comparisons with the EU’s risk-based approach, and a lack of clarity has raised concerns about unintended consequences for freedom of expression in the country. The USA, which is home to many of the leading AI developers, still lacks a federal regulatory framework governing AI. The Donald Trump administration’s much-trumpeted AI Action Plan dismisses red tape in favour of innovation.

In the meantime, the country is developing a patchwork of fragmented regulation that relies on state-level legislation, sector-specific guidelines and legal cases.

Despite the growing pipeline of US court cases around AI liability, Alegre said the prospects of users bringing similar lawsuits in other jurisdictions were more limited.

“The cost in a jurisdiction such as England and Wales would be very high,” she said. “The potential, if you lose, of having to pay all the other side’s costs [is] a really big difference between the UK and the USA.”

The transatlantic divide on the notion of what freedom of expression means is also relevant, she said.

“For me, it’s a hard ‘no’ that AI has human rights. But even if AI did have freedom of expression, that still wouldn’t cover it for a lot of the worst-case scenarios like manipulation, coercive control, hate speech and so on.

“In Europe or the UK, that kind of speech is not protected by freedom of expression. If you say that the companies have their rights to freedom of expression to a degree, they still wouldn’t be allowed to express hate speech.”

As AI becomes integrated into our everyday communications, Hall concedes that the lines between AI and users’ rights and freedoms are becoming increasingly blurred. However, he said the argument that AI should be entitled to its own independent rights was fundamentally flawed.

“Anyone who tries to draw a bright line between human expression and AI expression is not living in the real world.”

Support free expression for all

At Index on Censorship, we believe everyone deserves the right to speak freely, challenge power and share ideas without fear. In a world where governments tighten control and algorithms distort the truth, defending those rights is more urgent than ever.

But free speech is not free. Instead we rely on readers like you to keep our journalism independent, our advocacy sharp and our support for writers, artists and dissidents strong.

If you believe in a future where voices aren’t silenced, help us protect it.

SUPPORT INDEX'S WORK