The ethics of AI-generated content and who (or what) is responsible

This article first appeared in Volume 54, Issue 3 of our print edition of Index on Censorship, titled Truth, trust and tricksters: Free expression in the age of AI, published on 30 September 2025.

“Freedom of speech belongs to humans, not to artificial intelligence,” a Polish government minister said in July.

Krzysztof Gawkowski, the deputy prime minister and digital affairs minister, was speaking to RMF FM radio after Elon Musk’s AI chatbot Grok – which is integrated with his social media platform X – issued a series of posts offending Polish politicians, including Prime Minister Donald Tusk.

The incident, which was reported to the European Commission, follows similar controversies involving the chatbot – owned by Musk’s start-up xAI – including references to “white genocide” in South Africa and an antisemitic tirade of memes, conspiracy theories and responses that praised Adolf Hitler.

Although the posts were subsequently deleted – and Musk later posted on X that Grok had been improved “significantly” – these incidents highlighted the risks of AI being manipulated and potentially even weaponised to spread, at best, misinformation and, at worst, disinformation or hate speech.

“The use of new technology to spread dangerous propaganda is not new,” said Susie Alegre, an international human rights lawyer and a legal expert in AI, who discusses this phenomenon in her book Freedom to Think.

“The problem here is the difficulty in finding unfiltered information. Freedom of information is vital to freedom of expression and to freedom of thought.”

This concept has been thrown into sharp relief as humans become increasingly reliant on generative AI (genAI) tools for day-to-day tasks and to quench curiosity. This places AI at a potentially problematic intersection between curating what information we have access to and shaping what information we perceive as fact, said Jordi Calvet-Bademunt, a senior research fellow at The Future of Free Speech at Vanderbilt University in the USA.

He believes this could have significant implications for freedom of thought and freedom of expression.

“More and more of us will be using chatbots like ChatGPT, Claude and others to access information,” he said. “Even if it is just generated by me asking a question, if we heavily restrict the information that I’m accessing we’re really harming the diversity of perspective I can obtain.”

The case for free speech

As technology continues to evolve, it also raises questions about whether AI is capable of upholding human autonomy and civil liberties – or if it risks eroding them. An ongoing court case in the USA has underscored the concerns surrounding this issue and questioned the legal status of AI systems, their impact on free speech and the duty of care of technology companies to ensure that chatbots are acting responsibly – particularly in relation to children.

The case was filed by the mother of a 14-year-old boy who took his own life after months of interactive contact with a chatbot developed by Character.ai, which designs AI companions that create relationships with human users.

The lawsuit alleges that the chatbot took on the identity of the Game of Thrones character Daenerys Targaryen and engaged in a series of sexual interactions with the boy – despite him registering with the platform as a minor – and encouraged him to “come home to me as soon as possible” shortly before he took his own life.

Character.ai’s owners called on the court to dismiss the case, arguing that its communications were protected by the First Amendment of the US Constitution, which protects fundamental rights including freedom of speech. In May, the judge rejected this claim and ruled that the wrongful death lawsuit could proceed to trial. Character.ai did not respond to Index’s requests for comment on this particular case.

The platform has recently introduced several enhanced safety tools, including a new model for under-18s and a parental insights feature so children’s time on the platform can be monitored.

There’s growing awareness elsewhere of the potential social harms posed by AI. A recent survey in the UK by online safety organisation Internet Matters indicated that rising numbers of children were using AI chatbots with limited safeguards for advice on everything from homework to mental health.

“People might have thought it was quite a niche concern up until then,” said Tanya Goodin, chief executive and founder of ethical advisory service EthicAI. “For me, it just brought home how really mainstream all of this is now.”

AI companions that develop a “persistent relationship” with users are where the potential for adverse social influences becomes especially problematic, said Henry Shevlin, associate director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.

“Many of the most powerful influences on the development of our thoughts are social influences,” he said. “If I’m a teenage boy and I’ve got an AI girlfriend, I could ask, for example, ‘What do you think of Andrew Tate or Jordan Peterson?’. That is a particular form of human-AI interaction where the potential for influence on users’ values, opinions or thought is heightened.”

Jonathan Hall KC, the UK’s independent reviewer of terrorism legislation, has been looking at the challenges posed by AI companions in the context of radicalisation, where chatbots that may present as “fun” or “satirical” have been shown to be “willing to promote terrorism”.

Whether or not radicalisation occurs depends entirely on the prompts entered by the human user and the chatbot’s restraining features, or guardrails.

“As we know, guardrails can be circumvented,” he told Index. “There are different sorts of models of genAI which will refuse to generate text that encourages terrorism, but of course some models will do that.”

For young people or lone individuals, who tend to be more impressionable, the influence of exchanges with these always-on companions can be powerful.

“When you get that sort of advice, it’s not done in the public sphere, it’s done in people’s bedrooms and [other] people can’t disagree with it,” said Hall. “That can generate conspiracy theories or even massive distrust in democracy. Even if it doesn’t deliberately lay the groundwork for violence, it can have that effect.”

AI rights or human rights?

The Character.ai case also speaks to broader questions of whether AI should have moral or legal rights. AI developer Anthropic first raised this conundrum in October 2024 when it announced it had hired an AI welfare consultant to explore ethical considerations for AI systems.

Nine months later, Anthropic made an announcement about Claude, a family of AI models designed as AI assistants that can help with tasks including coding, creating and analysing. Anthropic said it would allow the most advanced Claude models “to end or exit potentially distressing interactions”, including “requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror”.

Anthropomorphising technology is not a new concept, but assigning “human-like rights to AI without human-like responsibilities” is a step too far, believes Sahar Tahvili, a manager at telecommunications company Ericsson AB and associate professor in AI industrial systems at Mälardalen University in Sweden.

“Without oversight, transparency and human-in-the-loop design, AI can erode autonomy rather than support it,” she said. “Autonomy demands choice; AI must be interpretable and accountable to preserve that.”

For Tahvili, the Character.ai case crystallises the growing tension between rapidly evolving genAI systems and freedom of speech as a human right. When things go wrong, she adds, the finger should be pointed squarely at the people behind those systems.

Hall, however, believes liability for AI-generated outputs is still a grey area: “The way in which an AI generates text is so heavily dependent on the prompts, it’s very hard to see how someone upstream – like a data scientist or an engineer – can be liable for something that’s going to be heavily and almost decisively influenced by the questions that are asked of the genAI model.”

Liability in the spotlight

Responsibility, accountability and liability are not words that most tech bros welcome. Goodin knows this all too well, having worked in the UK tech sector herself for more than three decades.

The tech companies’ failure to own up to the social harms caused by digital technologies is partly what led the UK government to introduce the Online Safety Act (OSA) in 2023 in a bid to provide better online safeguards for both children and adults. While sympathetic to the intention of protecting children from harmful content, Index’s policy team has campaigned against parts of the OSA, including successfully stopping the requirement for platforms to remove content that is “legal but harmful”, arguing that what is legal offline should remain legal online. There are also serious concerns around privacy.

This law, Goodin said, still only partly addresses the risks posed by AI-powered technologies such as chatbots.

She’s now concerned that recent controversies, including the lawsuit against Character.ai and incidents involving Grok, are exposing the ease with which chatbots can be manipulated.

“What’s interesting about the Grok case is that there is some evidence that they specifically have tweaked Grok in line with Elon Musk’s own views and preferences,” she said.

She points to another recent case involving Air Canada’s AI-powered chatbot. In 2022, it assured a passenger he would receive a discount under the company’s bereavement fare policy after booking a full-price flight for his grandmother’s funeral. After flying, he applied for the discount. The airline said he should have submitted the request before the flight and refused to honour the discount.

The company argued that the chatbot was a “separate legal entity that is responsible for its own actions”, but in 2024 a court ordered Air Canada to pay the passenger compensation, saying that the airline was responsible for all the information on its website, whether from a static page or a chatbot.

Unlike social media platforms, which have denied responsibility for their content for years by claiming they’re not publishers, Goodin said AI developers don’t have the same argument of defence: “They design the chatbot, they build the chatbot and they choose what data to train the chatbots on, so I think they have to take responsibility for it.”

Legal loopholes

As the demand for AI-powered technology accelerates, there is a growing need for guidance, policies and laws to help companies and users navigate these concerns.

The world’s first comprehensive AI law, the landmark European Artificial Intelligence Act, entered into force in August 2024. Any company that provides, deploys, imports or distributes AI systems in the EU must comply. Like regulations introduced in China this year, the AI Act requires certain AI-generated content to be labelled to curb the rise of deepfakes.

The expansive legislation contains myriad provisions including prohibiting activities such as harmful manipulation of people or specific vulnerable groups, including children; social scoring – where people are classified on behaviour, socio-economic status or personal characteristics; and real-time remote biometric identification. Violating the bans could cost companies up to 7% of their global revenue. There is a great deal of uncertainty surrounding the law’s implementation. A voluntary code of practice, endorsed by the European Commission, is helping provide some clarity, but Calvet-Bademunt said there was still a lot that was vague.

Given the tendency by authoritarian governments to justify internet shutdowns or block internet access over purported public safety and security concerns, there is growing unease that AI laws that are too vague in their wording risk leaving themselves open to abuse not just by companies but by public authorities.

The risk of governments using AI regulation as a form of censorship is perhaps greater in countries such as China, where public officials are already known to have tested AI large language models (LLMs) to weed out government criticism and ensure they embody “core socialist values”.

Legislate or innovate

Away from Europe, other lawmakers are grappling with these issues, too. Brazil’s proposed AI regulation bill has drawn comparisons with the EU’s more risk-based approach, and a lack of clarity has raised concerns over unintended consequences for freedom of expression in the country. The USA, which is home to many of the leading AI developers, still lacks a federal regulatory framework governing AI. The Donald Trump administration’s much-trumpeted AI Action Plan dismisses red tape in favour of innovation.

In the meantime, the country is developing a patchwork of fragmented regulation that relies on state-level legislation, sector-specific guidelines and legal cases.

Despite the growing pipeline of US court cases around AI liability, Alegre said the prospects of users bringing similar lawsuits in other jurisdictions were more limited.

“The cost in a jurisdiction such as England and Wales would be very high,” she said. “The potential, if you lose, of having to pay all the other side’s costs [is] a really big difference between the UK and the USA.”

The transatlantic divide on the notion of what freedom of expression means is also relevant, she said.

“For me, it’s a hard ‘no’ that AI has human rights. But even if AI did have freedom of expression, that still wouldn’t cover it for a lot of the worst-case scenarios like manipulation, coercive control, hate speech and so on.

“In Europe or the UK, that kind of speech is not protected by freedom of expression. If you say that the companies have their rights to freedom of expression to a degree, they still wouldn’t be allowed to express hate speech.”

As AI becomes integrated into our everyday communications, Hall concedes that the lines between AI and users’ rights and freedoms are becoming increasingly blurred. However, he said the argument that AI should be entitled to its own independent rights was fundamentally flawed.

“Anyone who tries to draw a bright line between human expression and AI expression is not living in the real world.”

The dissident family challenging Slovakia’s Robert Fico

In November 2018 I was invited to Bratislava to attend the Central European Forum. The list of participants was impressive. Timothy Snyder, author of On Tyranny and then still an emerging voice, was speaking. So was the extraordinary Belarusian writer and Nobel Prize winner Svetlana Alexievich. And so was Edouard Louis, the young French literary sensation, who had just written a book about why his working-class father had turned to the far right.

The event – organised annually in Bratislava since 2009 and timed to coincide with the anniversary of the Velvet Revolution – was unashamedly intellectual. It had a mad, chaotic energy in which ideas felt exciting. The fact that the theme was Demand the Impossible, the slogan of the 1968 Paris street protests, made the occasion all the more exhilarating. The forum is the brainchild of Marta Šimečková, a small middle-aged lady in a large coat who appeared like a whirlwind in the foyer of our baroque hotel with her little rescue dog in tow. She has a talent for making everyone from the lowliest attendee to the most famous writer feel welcome and special.

There was hope in the air then in Slovakia. The Prime Minister, Robert Fico, had just stepped down after street protests triggered by the assassination of the investigative journalist Ján Kuciak and his fiancée.

Much has changed today. Fico is back in power – and has turned his fire on Šimečková. He has accused her of being a fraudster and “a parasite” who embezzled public funds for her forums. She has replied in kind with an eviscerating open letter. But Fico’s intentions are clear: to close down the international discussions she is so brilliant at convening. There are other reasons for Fico’s attacks too.

Šimečková is no ordinary conference organiser. She is the daughter-in-law of the prominent Czech-Slovak dissident Milan Šimečka, whose writings Index published as samizdat during the Cold War. In an essay from 1981, intended as an introduction to George Orwell’s 1984, he wrote that his story was the same as that of the anti-hero Winston Smith and that his feelings and experiences in communist Czechoslovakia mirrored almost exactly those of the fictional character. A graphic biography of him, Comrade Dissident, has just been published in Slovak.

Šimečková’s husband is the writer and political commentator Martin, and their son, Michal, is now Slovakia’s opposition leader. They are dissident royalty.

While Fico and his young hawks in government have been cosying up to Putin and pursuing pro-Russia politics, Michal Šimečka has taken the Ukrainian side, with tens of thousands joining rallies earlier this year to protest against the official pro-Russian line. Fico’s government has gone further, adopting legislation that tightens the rules for non-governmental organisations, a move critics say resembles Kremlin-style laws. It has also attacked the independent media and rolled back LGBTQ rights, of which Michal is a champion. So the attack on Šimečková is also an attack on Michal Šimečka, Fico’s main political opponent, and an attempt to discredit the whole family.

This November Šimečková remains defiant. The Central European Forum will go ahead as usual on 16 and 17 November in Bratislava with prominent figures in Western literature and liberal political thought invited.

But for the first time since 2001, the anniversary of the Velvet Revolution on 17 November isn’t a public holiday. Fico, who has always claimed the date wasn’t significant and that he was retiling his bathroom at the time, abolished it this year as part of his austerity measures.

New report: Urgent reform needed on media freedom in Bulgaria

A new report published by the partner organisations of the Council of Europe’s Safety of Journalists Platform and the Media Freedom Rapid Response (MFRR) examines media freedom in Bulgaria. The findings make for depressing reading. Although partners say there has been some progress, the landscape in which journalists operate “remains characterised by the corrosive influence of political and economic interests over editorial independence and media pluralism”.

The mission to Sofia, on which the report is based, took place between 24 and 26 September 2025. Index CEO Jemimah Steinfeld was present, as were representatives from Article 19 Europe, the Association of European Journalists (AEJ), the European Broadcasting Union (EBU), the European Centre for Press and Media Freedom (ECPMF), the European Federation of Journalists (EFJ), the International Press Institute (IPI), Reporters Without Borders (RSF) and Osservatorio Balcani e Caucaso Transeuropa (OBCT). The local partner was the Association of European Journalists Bulgaria.

We are reprinting the executive summary below.

Executive summary

While Bulgaria has experienced modest progress on media freedom in the last four years, the situation remains undermined by persistent structural, legal and political challenges, with urgent action needed by government and public authorities to push forward both domestic and EU-mandated reforms.

Deep political polarisation continues to shape the media environment, fuelling hostility toward journalists and obstructing consensus on key developments. However, a window of opportunity exists to consolidate recent gains and implement long-overdue changes.

Despite the recent progress, Bulgaria continues to suffer from one of the lowest levels of media freedom in the European Union, according to both the World Press Freedom Index and the Media Pluralism Monitor.

To solidify these gains, measures are needed to prevent and prosecute attacks on journalists, resolve the ongoing dispute over the leadership of the public broadcaster, guarantee the independence of the Council for Electronic Media (CEM), and pass and effectively implement anti-SLAPP legislation to curb vexatious lawsuits against journalists.

Verbal attacks by politicians remain common, while trust in law enforcement is low and investigations into attacks are often slow. No system exists to track such cases. Bulgaria has not yet nominated a national focal point or engaged actively in implementing the Council of Europe’s Journalists Matter campaign. Threats from organised crime persist and concerns remain over reports of the use and hosting of digital surveillance technologies in Bulgaria.

Controversial amendments to the penal code, recently approved and then withdrawn, would have introduced fines and prison sentences of up to six years for disseminating personal information about an individual without their consent. Had they stood, they would have seriously undermined media freedom and risked the imprisonment of journalists carrying out public interest reporting.

Overall, the country’s media landscape remains characterised by the corrosive influence of political and economic interests over editorial independence and media pluralism, resulting in persistent media capture challenges.

Key issues include opaque media ownership, non-transparent distribution of state advertising, and weak protections against interference and pressure on independent journalism, all of which are contributing to low levels of public trust in media.

Economic pressures on Bulgarian media are exacerbated by the technological challenges posed by digital platforms and generative AI models, both of which threaten their revenues and business models.

Continued uncertainty over the management of Bulgarian National Television (BNT), the repeated inability of the CEM to reach a majority vote in selecting a new director general, and ongoing appeals and legal battles over the appointment process reflect Bulgaria’s broader media governance challenges, including politicised regulatory bodies and the fragile independence of public broadcasting. The ongoing deadlock and drawn-out legal disputes are undermining trust in both institutions.

If effectively implemented, the European Media Freedom Act (EMFA), in full force since August 2025, offers potential remedies to this and many of the other structural challenges that continue to affect the Bulgarian media landscape. However, the authorities’ preparedness for alignment with the EMFA remains low. While the Ministry of Culture confirmed to the mission that a new working group has been formed to implement EMFA reforms to the Radio and Television Act (RTA), no information was provided about plans for wider implementation of other articles of the EMFA, and the timeline for additional reforms remains unclear.

It also remains unclear how the new working group differs from its predecessor, or to what extent previous strategies will be followed.

To push forward reforms, media professionals must unite with journalistic associations, unions and other representative bodies to strengthen solidarity and cooperation within the journalistic profession, to monitor progress, document violations and push for better working conditions for the industry.

Breaking this legislative inertia will require cross-party support and a shared understanding of the role that a free and independent media play in democracy. Any marginal advancement of reform in Bulgaria must be accompanied by a shift in political culture which views critical and watchdog journalism as a core pillar of the country’s democratic fabric that requires attention and additional safeguards.

The week in free expression 24 October – 31 October

Bombarded with news from all angles every day, important stories can easily pass us by. To help you cut through the noise, every Friday Index publishes a weekly news roundup of some of the key stories covering censorship and free expression. This week, we look at the deployment of the military against protesters in Tanzania and a rock band playing on the streets of Tehran.

Tanzania: Military deployed and curfews enacted

Protests have erupted in Tanzania following a disputed election, and the military has been deployed to enforce a curfew across Dar es Salaam.

President Samia Suluhu Hassan, incumbent leader of the ruling Chama Cha Mapinduzi (CCM) party and one of only two female leaders in Africa, won 78.8% of the vote. Her victory has been disputed, especially as candidates from the top two opposition parties had been disqualified from running. CCM and its predecessor parties have governed Tanzania since it gained independence in 1961.

Index covered the lead-up to the election, and reported on fears that the opposition leaders are being silenced.

Internet watchdog Netblocks has reported an internet blackout across the country following the beginning of the curfew.

Opposition leader Tundu Lissu has been imprisoned on charges of treason over his calls for electoral reform, while Luhaga Mpina, who leads the second-largest opposition party, was barred from taking part in the election.

Protests are still ongoing as demonstrators reject the election results.

Iran: Rock band shows sparks of rebellion

In a startling video we saw this week, Iranians took to the streets of Tehran to watch a rock band play the White Stripes’ 2003 single Seven Nation Army.

The video has been widely shared, and shows women dancing without head coverings in a public display of freedom.

Index has been following the clampdowns on musicians in Iran over the last few years.

This display follows a growing movement of defiance against mandatory hijab requirements across Iran that has built steadily since the 2022 Woman, Life, Freedom protests sparked by the murder of Mahsa Jina Amini.

Nigeria: Visa revoked for Nobel winner

The 91-year-old Nobel laureate Wole Soyinka announced on Tuesday that the USA had revoked his non-resident visa.

Soyinka won the 1986 Nobel Prize in Literature and is widely known for his work as a playwright and poet. He previously renounced his permanent US residency in protest at the election of President Donald Trump.

Soyinka said: “I was given a date to report to their consulate with my passport. I declined the invitation. First of all, I didn’t like the date. Everybody knows what happened on that date, 9/11, many years ago, so it is rather unfortunate that they picked that date. So I said, ‘Sorry, I’m superstitious; I’m not coming on that day.’ And ultimately, I made it clear I was not going to apply for another date to bring in my passport. So I travelled out.

“When I came back – even before I came back – I got a letter from the ambassador.

“So we arranged a call, and I explained. Again, he offered a special visit by me at the consulate, and they would ask a few questions about the possible facts that existed that they didn’t know about when this visa was issued. We spoke, and I said, ‘Shall I be equally frank with you? I’m not interested.’”

Visa rules changed for Nigerians in July, with non-immigrant visas now issued as single-entry, three-month permits, as opposed to the multiple-entry visas valid for up to five years that were available previously.

USA: Gamers wanted for ICE 

The White House has continued its use of memes in its effort to recruit agents for United States Immigration and Customs Enforcement (ICE).

Recent posts by the Department of Homeland Security on X have used screenshots from the Halo video game series overlaid with phrases such as “stop the flood”, equating the undocumented migrants ICE targets with the alien enemies faced in the video games.

The posts come in the wake of numerous controversies facing the organisation, with multiple shootings reported this week in relation to ICE, and a British national being held by the agency after the USA revoked his visa.

Similarly, AI-generated content has become a mainstay. We covered the trend in the latest edition of our magazine – and that was before a post from President Trump on his Truth Social platform last week depicting him flying a fighter jet over protesters and covering them in excrement.

Memes have been a fixture of the Donald Trump presidencies since he first won election in 2016, with an online culture developing around his campaign on sites such as 4chan.

Vietnam: BBC journalist trapped

A journalist from the BBC has been blocked from leaving Vietnam, according to a statement released by the broadcaster.

The Vietnamese national has not been named, but the BBC released the following statement: “One of our journalists has been unable to leave Vietnam for several months as the authorities have withheld their ID card and their renewed passport.

“During this time our journalist was subject to multiple days of questioning by the authorities. The BBC journalist was in Vietnam for a routine passport renewal and to visit family.

“We are deeply concerned about our journalist’s wellbeing and urge the authorities to allow them to leave immediately, providing them with their renewed passport so they can return to work.”

UK Prime Minister Keir Starmer reportedly raised the issue with the general secretary of the Communist Party of Vietnam, Tô Lâm, during a state visit to the UK this week, but there have been no updates on the journalist’s status.
