6 Nov 2025 | Americas, Europe and Central Asia, News and features, United Kingdom, United States, Volume 54.03 Autumn 2025
This article first appeared in Volume 54, Issue 3 of our print edition of Index on Censorship, titled Truth, trust and tricksters: Free expression in the age of AI, published on 30 September 2025. Read more about the issue here.
“Freedom of speech belongs to humans, not to artificial intelligence,” a Polish government minister said in July.
Krzysztof Gawkowski, the deputy prime minister and digital affairs minister, was speaking to RMF FM radio after Elon Musk’s AI chatbot Grok – which is integrated with his social media platform X – issued a series of posts offending Polish politicians, including Prime Minister Donald Tusk.
The incident, which was reported to the European Commission, follows similar controversies involving the chatbot – owned by Musk’s start-up xAI – including references to “white genocide” in South Africa and an antisemitic tirade of memes, conspiracy theories and responses that praised Adolf Hitler.
Although the posts were subsequently deleted – and Musk later posted on X that Grok had been improved “significantly” – these incidents highlighted the risks of AI being manipulated and potentially even weaponised to spread, at best, misinformation and, at worst, disinformation or hate speech.
“The use of new technology to spread dangerous propaganda is not new,” said Susie Alegre, an international human rights lawyer and a legal expert in AI, who discusses this phenomenon in her book Freedom to Think.
“The problem here is the difficulty in finding unfiltered information. Freedom of information is vital to freedom of expression and to freedom of thought.”
This concept has been thrown into sharp relief as humans become increasingly reliant on generative AI (genAI) tools for day-to-day tasks and to satisfy their curiosity. It places AI at a potentially problematic intersection, curating both what information we can access and what information we come to perceive as fact, said Jordi Calvet-Bademunt, a senior research fellow at The Future of Free Speech at Vanderbilt University in the USA.
He believes this could have significant implications for freedom of thought and freedom of expression.
“More and more of us will be using chatbots like ChatGPT, Claude and others to access information,” he said. “Even if it is just generated by me asking a question, if we heavily restrict the information that I’m accessing we’re really harming the diversity of perspective I can obtain.”
The case for free speech
As technology continues to evolve, it also raises questions about whether AI is capable of upholding human autonomy and civil liberties – or if it risks eroding them. An ongoing court case in the USA has underscored the concerns surrounding this issue and questioned the legal status of AI systems, their impact on free speech and the duty of care of technology companies to ensure that chatbots are acting responsibly – particularly in relation to children.
The case was filed by the mother of a 14-year-old boy who took his own life after months of interactive contact with a chatbot developed by Character.ai, which designs AI companions that create relationships with human users.
The lawsuit alleges that the chatbot took on the identity of the Game of Thrones character Daenerys Targaryen and engaged in a series of sexual interactions with the boy – despite him registering with the platform as a minor – and encouraged him to “come home to me as soon as possible” shortly before he took his own life.
Character.ai’s owners called on the court to dismiss the case, arguing that its communications were protected by the First Amendment of the US Constitution, which protects fundamental rights including freedom of speech. In May, the judge rejected this claim and ruled that the wrongful death lawsuit could proceed to trial. Character.ai did not respond to Index’s requests for comment on this particular case.
The platform has recently introduced several enhanced safety tools, including a new model for under-18s and a parental insights feature so children’s time on the platform can be monitored.
There’s growing awareness elsewhere of the potential social harms posed by AI. A recent survey in the UK by online safety organisation Internet Matters indicated that rising numbers of children were using AI chatbots with limited safeguards for advice on everything from homework to mental health.
“People might have thought it was quite a niche concern up until then,” said Tanya Goodin, chief executive and founder of ethical advisory service EthicAI. “For me, it just brought home how really mainstream all of this is now.”
AI companions that develop a “persistent relationship” with users are where the potential for adverse social influences becomes especially problematic, said Henry Shevlin, associate director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.
“Many of the most powerful influences on the development of our thoughts are social influences,” he said. “If I’m a teenage boy and I’ve got an AI girlfriend, I could ask, for example, ‘What do you think of Andrew Tate or Jordan Peterson?’. That is a particular form of human-AI interaction where the potential for influence on users’ values, opinions or thought is heightened.”
Jonathan Hall KC, the UK’s independent reviewer of terrorism legislation, has been looking at the challenges posed by AI companions in the context of radicalisation, where chatbots that may present as “fun” or “satirical” have been shown to be “willing to promote terrorism”.
Whether or not radicalisation occurs depends entirely on the prompts entered by the human user and the chatbot’s restraining features, or guardrails.
“As we know, guardrails can be circumvented,” he told Index. “There are different sorts of models of genAI which will refuse to generate text that encourages terrorism, but of course some models will do that.”
For young people or lone individuals, who tend to be more impressionable, the influence of exchanges with these always-on companions can be powerful.
“When you get that sort of advice, it’s not done in the public sphere, it’s done in people’s bedrooms and [other] people can’t disagree with it,” said Hall. “That can generate conspiracy theories or even massive distrust in democracy. Even if it doesn’t deliberately lay the groundwork for violence, it can have that effect.”
AI rights or human rights?
The Character.ai case also speaks to broader questions of whether AI should have moral or legal rights. AI developer Anthropic first raised this conundrum in October 2024, when it announced it had hired a researcher dedicated to AI welfare to explore ethical considerations for AI systems.
Nine months later, Anthropic made an announcement about Claude, its family of AI models designed to act as assistants for tasks including coding, content creation and analysis. Anthropic said it would allow the most advanced Claude models “to end or exit potentially distressing interactions”, including “requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror”.
Anthropomorphising technology is not a new concept, but assigning “human-like rights to AI without human-like responsibilities” is a step too far, believes Sahar Tahvili, a manager at telecommunications company Ericsson AB and associate professor in AI industrial systems at Mälardalen University in Sweden.
“Without oversight, transparency and human-in-the-loop design, AI can erode autonomy rather than support it,” she said. “Autonomy demands choice; AI must be interpretable and accountable to preserve that.”
For Tahvili, the Character.ai case crystallises the growing tension between rapidly evolving genAI systems and freedom of speech as a human right. When things go wrong, she adds, the finger should be pointed squarely at the people behind those systems.
Hall, however, believes liability for AI-generated outputs is still a grey area: “The way in which an AI generates text is so heavily dependent on the prompts, it’s very hard to see how someone upstream – like a data scientist or an engineer – can be liable for something that’s going to be heavily and almost decisively influenced by the questions that are asked of the genAI model.”
Liability in the spotlight
Responsibility, accountability and liability are not words that fall kindly on most tech bros’ ears. Goodin knows this all too well, having worked in the UK tech sector herself for more than three decades.
The tech companies’ failure to own up to the social harms caused by digital technologies is partly what led the UK government to introduce the Online Safety Act (OSA) in 2023, in a bid to provide better online safeguards for both children and adults. While sympathetic to the intention of protecting children from harmful content, Index’s policy team has campaigned against parts of the OSA – including successfully stopping the requirement for platforms to remove content which is “legal but harmful” – arguing that what is legal offline should remain legal online. There are also serious concerns around privacy.
This law, Goodin said, still only partly addresses the risks posed by AI-powered technologies such as chatbots.
She’s now concerned that recent controversies, including the lawsuit against Character.ai and incidents involving Grok, are exposing the ease with which chatbots can be manipulated.
“What’s interesting about the Grok case is that there is some evidence that they specifically have tweaked Grok in line with Elon Musk’s own views and preferences,” she said.
She points to another case involving Air Canada’s AI-powered chatbot. In 2022, the chatbot assured a passenger booking a full-price flight for his grandmother’s funeral that he could claim a discount under the company’s bereavement fare policy after travelling. When he applied for the discount after flying, the airline said the request should have been submitted before the flight and refused to honour it.
The company argued that the chatbot was a “separate legal entity that is responsible for its own actions”, but in 2024 a Canadian tribunal ordered Air Canada to pay the passenger compensation, saying that the airline was responsible for all the information on its website, whether it came from a static page or a chatbot.
Unlike social media platforms, which have for years denied responsibility for their content by claiming they are not publishers, AI developers don’t have the same defence, Goodin said: “They design the chatbot, they build the chatbot and they choose what data to train the chatbots on, so I think they have to take responsibility for it.”
Legal loopholes
As the adoption of AI-powered technology accelerates, there is a growing demand for guidance, policies and laws to help companies and users navigate these concerns.
The world’s first comprehensive AI law, the landmark European Artificial Intelligence Act, entered into force in August 2024. Any company that provides, deploys, imports or distributes AI systems in the EU must comply. Like regulations introduced in China this year, the AI Act requires certain AI-generated content to be labelled, to curb the rise of deepfakes.
The expansive legislation contains myriad provisions, including bans on the harmful manipulation of people or of specific vulnerable groups such as children; on social scoring, where people are classified by behaviour, socio-economic status or personal characteristics; and on real-time remote biometric identification. Violating the bans could cost companies up to 7% of their global revenue. There is a great deal of uncertainty surrounding the law’s implementation. A voluntary code of practice, endorsed by the European Commission, is helping provide some clarity, but Calvet-Bademunt said much remained vague.
Given authoritarian governments’ tendency to justify internet shutdowns or blocked access on purported public safety and security grounds, there is growing unease that vaguely worded AI laws leave themselves open to abuse, not just by companies but by public authorities.
The risk of governments using AI regulation as a form of censorship is perhaps greater in countries such as China, where public officials are already known to have tested AI large language models (LLMs) to weed out government criticism and ensure they embody “core socialist values”.
Legislate or innovate
Away from Europe, other lawmakers are grappling with these issues too. Brazil’s proposed AI regulation bill has drawn comparisons with the EU’s risk-based approach, and a lack of clarity has raised concerns about unintended consequences for freedom of expression in the country. The USA, home to many of the leading AI developers, still lacks a federal regulatory framework governing AI. The Donald Trump administration’s much-trumpeted AI Action Plan dismisses red tape in favour of innovation.
In the meantime, the country is developing a patchwork of fragmented regulation that relies on state-level legislation, sector-specific guidelines and legal cases.
Despite the growing pipeline of US court cases around AI liability, Alegre said the prospects of users bringing similar lawsuits in other jurisdictions were more limited.
“The cost in a jurisdiction such as England and Wales would be very high,” she said. “The potential, if you lose, of having to pay all the other side’s costs [is] a really big difference between the UK and the USA.”
The transatlantic divide on the notion of what freedom of expression means is also relevant, she said.
“For me, it’s a hard ‘no’ that AI has human rights. But even if AI did have freedom of expression, that still wouldn’t cover it for a lot of the worst-case scenarios like manipulation, coercive control, hate speech and so on.
“In Europe or the UK, that kind of speech is not protected by freedom of expression. If you say that the companies have their rights to freedom of expression to a degree, they still wouldn’t be allowed to express hate speech.”
As AI becomes integrated into our everyday communications, Hall concedes that the lines between AI and users’ rights and freedoms are becoming increasingly blurred. However, he said the argument that AI should be entitled to its own independent rights was fundamentally flawed.
“Anyone who tries to draw a bright line between human expression and AI expression is not living in the real world.”
29 Oct 2025 | Americas, Asia and Pacific, News and features, South Korea, United States, Volume 54.03 Autumn 2025
This article first appeared in Volume 54, Issue 3 of our print edition of Index on Censorship, titled Truth, trust and tricksters: Free expression in the age of AI, published on 30 September 2025. Read more about the issue here.
"Don’t speak ill of the dead” is an aphorism that dates back centuries, but what if the dead speak ill of you? Over the past few years there has been a rise in the creation of chatbots trained on the social media and other data of the deceased. These griefbots are deepfakes designed to simulate the likeness and the personality of someone after their death, as though they have been brought back as ghosts.
The concept of the griefbot is not new. Our narratives around AI span centuries, and stories about creating an artificial version of a lost loved one can be found in Greek mythology: Laodameia, for example, distraught at losing her husband Protesilaus during the Trojan War, commissioned an exact likeness of him. (It did not end well: she was caught in bed with it. Her father, fearing she was prolonging her grief, burned the wax replica of her husband, and Laodameia killed herself to be with Protesilaus.)
Elsewhere, as US academic Alexis Elder has explored, there are precursors to griefbots in classical Chinese philosophy. The Confucian philosopher Xunzi, writing in the third century BCE, described a ritual in which the deceased person was deliberately impersonated via a roleplay to allow loved ones the chance to engage with them once more.
These days, sci-fi surfaces our contemporary fears, and TV shows have aired notable storylines warning of the pitfalls of resurrecting our loved ones via technology. In the 2013 Black Mirror episode Be Right Back, a grieving woman uses an AI service to talk with her recently deceased partner, desperate for a connection that is ultimately doomed to be illusory.
Grief tech hit the headlines in 2020 when US rapper Kanye West gave his then-wife, Kim Kardashian, a birthday hologram of her dead father.
“Kanye got me the most thoughtful gift of a lifetime,” she wrote on social media. “It is so lifelike and we watched it over and over.”
West likely steered the script, as might have been obvious when the hologram told Kim she’d married “the most, most, most, most, most genius man in the whole world – Kanye West”.
While the broader public perception of ghostbots is often one of distaste and concern, those who have engaged with the digital echoes of a lost loved one have been surprisingly positive. When we lose someone we love, we do what we can to fix in place our concept of them. We remember and we memorialise: keepsakes and pictures, speaking their names and telling their stories. Having them with us again through technology is compelling. A Guardian newspaper article in 2023 reported users’ sense of comfort and closure at engaging with chatbots of their dead relatives.
“It’s like a friend bringing me comfort,” said one user.
With a potentially huge new market – grief is universal, after all – come the start-ups. Alongside general tools such as ChatGPT are dedicated software products. The US-based HereAfterAI, which bills itself as a ‘memory app’, allows users to record their thoughts, upload photos and grant their loved ones access to the content. South Korean company DeepBrain AI claims it can build an avatar of your dead loved one from just a single photo and a 10-second recording of their voice.
Current technology offers us the ‘could we?’, but what about the ‘should we?’ In their 2023 paper, Governing Ghostbots, Edina Harbinja, Lilian Edwards and Marisa McVey flagged a major problem: that of consent.
“In addition to the harms of emotional dependence, abusive communications and deception for commercial purposes, it is worth considering if there is potential harm to the deceased’s antemortem persona,” they wrote.
If we have some ownership of our data when alive, then should we have similar rights after our death? Creating an avatar of someone who is no longer around to approve it means we are literally putting words in someone’s mouth. Those words might be based on sentences they’ve typed and videos they’ve made but these have been mediated through machine learning, generating an approximation of an existence.
There is, of course, the potential that a desire for a sanitised reminder of the deceased means their words are only permitted to be palatable. AI chatbots are subject to the same content moderation – or censorship – that applies to the large language models (LLMs) that drive them. Views could be watered down and ideologies reconfigured. There is no true freedom of speech in the literal sense, and no way to object to the lack of it. The dead have no redress.
Conversely, what if posthumous avatars are built for political influence? In India in 2024, a deepfake avatar of a woman who had died more than a decade previously – the daughter of the founder of the Tamil Tigers – was shown in a video urging Tamils to fight for freedom. And in the USA, the parents of Joaquin Oliver, killed in a school shooting in Florida in 2018, created an AI version of their son to speak to journalists and members of Congress to push for gun reform. In both the India and USA cases, the griefbot technology did not exist when these people died and they would have had no way of knowing this could happen, let alone be able to consent to it.
Whether we like it or not, most of us will live on digitally when we die. Our presence is already out there in the form of data: all the social media we’ve ever posted, all the photos and videos of us online, our transaction history, our digital footprints. Right now, there is a lack of clear governance. Digital rights vary dramatically from jurisdiction to jurisdiction, and AI regulation is in its infancy. Only the EU and China currently have explicit AI legislation in place; moves are afoot in other countries, including the USA and UK, but nothing is yet in statute. Amid all of this, global tech companies get to set the agenda. For now, all we have is the hope that we can set our own personal boundaries for posthumous expression before our grief becomes someone else’s commodity.
9 Oct 2025 | Middle East and North Africa, News and features, Saudi Arabia
This is the final day of the Riyadh Comedy Festival so we thought we’d publish some jokes audiences probably won’t have heard during the last fortnight.
Index staff have used AI to imagine some gags from artificial facsimiles of stand-ups Bill Burr, Jimmy Carr, Jack Whitehall and Louis C.K.
We felt compelled to do this because we support those in Saudi Arabia whose voices are so often silenced and those who are currently in prison. Last week we published an article by Ghanem al-Masarir about how he was persecuted as a Saudi comedian, and we remember journalist Jamal Khashoggi, who was murdered by the regime in the Saudi consulate in Istanbul seven years ago on 2 October – a grim reminder of the stance the Saudi government takes against its critics.
For extra context, this clause from a leaked contract was a condition of performing at the festival:
“ARTIST shall not prepare or perform any material that may be considered to degrade, defame, or bring into public disrepute, contempt, scandal, embarrassment, or ridicule A) The Kingdom of Saudi Arabia, including its leadership, public figures, culture, or people; B) The Saudi royal family, legal system, or government, and; C) Any religion, religious tradition, religious figure, or religious practice.”
Google Gemini in the style of Bill Burr
"The whole thing with this Saudi oil money... everybody's going, 'How could you? How could you take that blood money?' And I'm just looking at them like, 'Have you seen the offer? That's not money, that's a yacht dealership. That’s a 'Hey Bill, we're giving you enough cash to never have to stand next to a TSA agent again.' Suddenly, all that moral outrage I was preaching about two years ago? Yeah, that’s gone. It's in a tiny little oil drum somewhere in the desert. I went from 'Speak truth to power!' to 'Does this robe come with a gold chain?' in like, nine seconds."
Anthropic’s Claude in the style of Jimmy Carr
"The Saudi royal family spent 100 billion dollars on a futuristic city in the desert called NEOM. 100 billion! On a city that doesn't exist yet!
You know what would be MORE futuristic? Women's rights.
[pause for laughs]
Cost a lot less, too.
[short laugh]
They've got plans for flying cars, robot servants, artificial moons... but apparently equal rights for half the population? 'Woah, woah, woah... let's not get science fiction about this!'
[smirks]
And they claim it's going to be this eco-friendly utopia powered entirely by renewable energy. Which is ironic, because the only thing they've been good at renewable is OPPRESSION.
[staccato laugh]
The Saudi government says journalists are always welcome to come visit and report on their progress. Just... maybe don't write anything negative. Or critical. Or true.
You know, JOURNALISM.
[pause]
Because the last guy who did that? Let's just say... he's in pieces about it.
[waits for groan/laugh]
Too dark? Oh, I'm sorry—I didn't realise we were suddenly worried about what's inappropriate!"
Google Gemini in the style of Jack Whitehall
Right, so my agent – you’d hate him, he wears a gilet indoors – he rings me up. “Jack, darling! Got a massive gig for you. Corporate do. The fee is… well, let’s just say you could buy a controlling stake in Waitrose.”
I’m in. Obviously. I’m thinking, lovely jubbly, probably some tech bros in a horribly trendy warehouse in Shoreditch.
He goes, “It’s in Riyadh.”
Ah. Right.
He says, “Don’t worry! They’re super keen. They’ve just sent over a small list of topics to avoid in your set.”
He starts reading it out. “Okay so… no politics, no religion, no mentions of the royal family, no observational stuff about men and women… and absolutely no sarcasm.”
And I'm just sitting there, thinking... my God. That's my entire personality gone. What am I supposed to do for twenty minutes? Just list my favourite types of hummus?
But then he read the last rule: “And finally, absolutely no crowd work.”
And honestly? That’s where I drew the line. I mean, you can take away my freedom of expression, you can censor my political satire... but you will not take away my God-given right to bully a man in the front row for wearing terrible shoes. Some principles are just non-negotiable.
OpenAI’s ChatGPT in the style of Louis C.K.
“So the Saudi government…
You ever look at a country and think, ‘Wow… that place is like if God gave a toddler infinite money and zero supervision’?
It’s just oil, gold, and beheadings. Like… literal beheadings. Not metaphorical. Like, actual… 'Whoops, I tweeted!' — CHOP.
And people still say, ‘Well, they’re making progress!’
Yeah? So is cancer.”
[Crowd groans — Louis leans in, smirking]
“No, really — they let women drive now!
As long as they don’t drive... their own thoughts.”
[Beat — he shrugs, mock-defensive]
“I know. I shouldn’t talk. I come from America — the land of freedom, where we bomb the same countries we lecture about human rights.
It’s like yelling at your kid for smoking while you’re doing heroin in the garage. ‘Don’t do that, it’s bad for you!’”
[He pauses, then sighs]
“But I still fly through Riyadh sometimes. Why?
'Cause I’m a piece of shit.
And they’ve got phenomenal airport lounges.
Like, terrifying government… but you can get a foot massage and a lamb biryani while pretending they don’t jail poets.”
24 Sep 2025 | News and features, Volume 54.03 Autumn 2025
In mid-May 2023, The Irish Times published an article that accused women who use fake tan of mocking those with naturally dark skin. The op-ed was initially said to be written by Adriana Acosta-Cortez, a 29-year-old Ecuadorian health worker living in north Dublin.
But no such person existed. “The article and the accompanying byline photo may have been produced, at least in part, using generative AI technology,” read an editorial in The Irish Times four days after the piece first published.
Two months later, HoldtheFrontPage – a news website for journalists with a focus on regional media across the UK – published an investigative piece documenting how artificial intelligence (AI) was used to launch The Bournemouth Observer, a fake newspaper. “It was obvious that the content was written by AI because the writing was so bad,” HoldtheFrontPage editor Paul Linford told Index. “But since then, AI has got much better at writing stories, and I suspect it will eventually become harder to spot when writing is being done by AI or real journalists.”
Index on Censorship was also caught out, by a journalist calling themselves Margaux Blanchard, whose article was published in the spring edition of the magazine. Ironically, it was about journalism in Guatemala and was written by AI. Others – Wired and Business Insider – also fell victim to “Margaux”.
James Barrat claimed AI “will eventually bring about the death of writing as we know it.” The American documentary maker and author has been researching and writing about AI for more than a decade. His previous books include Our Final Invention: Artificial Intelligence and the End of the Human Era (2013), which ChatGPT recently ingested. “There are presently ongoing lawsuits about this because OpenAI took my book [without my permission] and didn’t pay me,” Barrat explained. “Right now, if you tell ChatGPT ‘write in the style of James Barrat’ it doesn’t produce an exact replica, but it’s adequate, and machine writing is getting better all the time.”
AI "will eliminate 30% of all jobs"
In early September, Barrat published The Intelligence Explosion: When AI Beats Humans at Everything (2025). The book makes two bold predictions: first, that AI has the potential in the not-too-distant future to match, and perhaps even surpass, our species’ intelligence; second, that by 2030 AI will eliminate 30 percent of all jobs done by humans, including those of writers. Freelance journalists will benefit in the short term, Barrat claimed. “Soon a basic features writer, using AI, will be able to produce twice as much content and get paid twice as much,” he said. “But in [the] long run the news organisations will get rid of [most] writers because people won’t care if content is written by AI or not.”
Tobias Rose-Stockwell did not share that view. “There will always be a market for verified accurate information, which requires humans,” the American writer, designer, technologist and media researcher said. “So truthful journalism isn’t going away, but it’s going to be disrupted by AI, which can now generate content in real time. This will lead to more viral falsehoods, confusion and chaos in our information ecosystem.”
Rose-Stockwell elaborated on this topic in Outrage Machine (2023). The book documents how the rise of social media in the mid-2000s was made possible by algorithms, which are mathematical instructions that process data to produce specific outcomes. In the early days of social media users viewed their feeds in chronological order. Eventually, though, Facebook, Instagram, X, TikTok, and other social media platforms realised it was more profitable to organise that information via algorithmic feeds, powered by artificial intelligence and, in particular, machine learning (ML) where AI is used to identify behaviours and patterns that may be missed by human observers. ML tools analyse users’ behaviour, preferences, and interactions, keeping them emotionally engaged for longer. “Feed algorithms are much better at re-coordinating content than any human ever could,” said Rose-Stockwell. “They can even create bespoke little newspapers or television shows for us.”
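To make that shift concrete, here is a minimal illustrative sketch in Python. It is not any platform’s real system: the posts, scores and the predicted_engagement field are invented assumptions, standing in for the output of a machine-learning model trained on past user behaviour. It simply contrasts a chronological feed with one ranked by predicted engagement.

# A minimal sketch (not any platform's real system) of the shift Rose-Stockwell
# describes: from a chronological feed to one ranked by predicted engagement.
# The posts, scores and field names below are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: int               # seconds since some epoch (illustrative)
    predicted_engagement: float  # stand-in for an ML model's engagement score

posts = [
    Post("news_desk", "Council passes budget", timestamp=100, predicted_engagement=0.12),
    Post("friend", "Holiday photos", timestamp=200, predicted_engagement=0.30),
    Post("outrage_acct", "You won't BELIEVE this", timestamp=50, predicted_engagement=0.91),
]

# Early social media: newest first, regardless of content.
chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)

# Algorithmic feed: the model's engagement prediction decides what appears first,
# which is why emotionally charged posts tend to rise to the top.
ranked = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

print([p.author for p in chronological])  # ['friend', 'news_desk', 'outrage_acct']
print([p.author for p in ranked])         # ['outrage_acct', 'friend', 'news_desk']

The design point is the single change of sort key: nothing about the content itself changes, but what users see first, and therefore what holds their attention, does.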
"AI is already in the process of rapidly transforming journalism,” said Dr Tomasz Hollanek, a technology ethics specialist at the University of Cambridge with expertise in intercultural AI ethics and ethical human-AI interaction design. “As AI systems become more adept at producing content that appears authentic, detecting fabricated material will get harder.”
Hollanek spoke about editors giving journalists clear guidelines about when and where AI can be used. The Associated Press, for instance, currently allows staff to test AI tools but bans publishing AI-generated text directly.
“What’s important about these guidelines is that while they recognise AI as a new tool, they also stress that journalism already has mechanisms for accountability,” said Hollanek. He also criticised the sensationalist tone journalists typically take when writing about AI, pointing to unnecessary hype, which leads to distorted public understanding and skewed policy debates.
“Journalists strengthening their own critical AI literacy will make the public more informed about AI and more capable of shaping its trajectory.”
AI's role in news "needs to be trackable"
Petra Molnar, a Canadian lawyer and anthropologist who specialises in migration and human rights, claimed “the general public needs to understand that AI is not some abstract tool out there, but it’s already structuring our everyday lives.”
Molnar said there is an urgent need for public awareness campaigns that make AI’s role in news and politics visible and trackable. She described companies such as Meta, X, Amazon, and OpenAI as “global gatekeepers [of information] with the power to amplify some voices while silencing others, often reinforcing existing inequalities.”
“Most people experience AI through tools like news feeds, predictive texts, or search engines, yet many do not realise how profoundly AI shapes what they see and think,” said Molnar, who is the associate director of the Refugee Law Lab at York University, Toronto – which undertakes research and advocacy about legal analytics, artificial intelligence and new border control technologies that impact refugees. “AI is often presented as a neutral tool, but the reality is that it encodes power.”
Last year, Molnar published The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence (2024). The book draws attention to a recent proliferation across the globe of digital technologies used to track and surveil refugees, political dissidents and frontline activists crossing borders in times of conflict. Molnar claimed that “AI threatens to accelerate the collapse of journalism by privileging speed and engagement over accuracy and depth.” She cited examples of journalists using OpenAI-generated text tools to churn out surface-level articles that echo sensational framings around migration, without investigative depth.
“Automated systems may generate content that looks like journalism, but it’s stripped of accountability and critical inquiry that’s required to tell complex stories,” said Molnar. “Journalism’s future depends on human reporters who can investigate power and rigorously fact check, something AI simply cannot replicate.”
Human oversight
Sam Taylor, campaigns & communications officer at the UK’s National Union of Journalists (NUJ), shared that view. “Editors and writers should exercise caution before using AI in their work,” he said. “Generative AI often draws on databases from the internet that contains stereotypes, biases, and misinformation.”
“To maintain and strengthen public trust in journalism, AI must only be used as an assistive tool with human oversight,” said NUJ general secretary, Laura Davison.
Everyone Index spoke to agreed that AI, for all its flaws, offers journalists enormous benefits, including automating mundane routine tasks such as transcription and data sorting. It can also make data journalism – exploring large data sets to uncover stories – much more accessible, because it can crunch the data and identify interesting nuggets far faster than a person can. This leaves journalists with more time and energy for critical thinking and, ultimately, for telling more complex and nuanced stories.
But there was also an overwhelming consensus that AI cannot fact-check accurately or be trusted as a credible verifier of information, not least because it suffers from hallucinations. “Due to the complexity of what is going on inside, it hallucinates,” James Barrat explained. “When this happens, AI gets confused and tells lies.”
“The jury is still out on whether or not this hallucination problem can be solved,” said Tobias Rose-Stockwell. “Journalism must remain grounded in ethical responsibility and context,” said Petra Molnar. “What we need is human judgement, applied critically and ethically, supported by but not replaced by technology.”
Is AI a threat?
Anyone who believes in journalism’s primary mission – to challenge power by investigating the truth – is likely to agree. But is this wishful thinking from a bygone era? James Barrat believes so. He points out that, eventually, we may not have the option to choose. “A scenario that could happen in [the] near future is that AI could become hostile to us,” he said. “AI could take control of our water and our electrical systems. Just recently, a large language model (LLM) agreed that its creator should be killed.”
Barrat mentions an interview he did with the British science fiction writer and futurist Sir Arthur C Clarke before his death, aged 90, in 2008. Clarke co-wrote the screenplay for 2001: A Space Odyssey (1968). Directed by Stanley Kubrick, the Oscar-winning film tells the story of HAL, an AI-powered computer aboard the Discovery One spacecraft bound for Jupiter. Eventually, HAL experiences a program conflict, malfunctions and, to defend itself, turns on its human crew members.
Arthur C Clarke told Barrat, “Humans steer the future because we are the most intelligent species on the planet, but when we share the planet with something smarter than us, they will steer the future.”
“AI’s intelligence grows exponentially,” Barrat concludes. “As it gets smarter, we will stop understanding it. We really are inviting disaster.”
The autumn issue of Index magazine, titled Truth, trust and tricksters and published on 26 September, looks at the threats that artificial intelligence poses to freedom of expression