Do the dead have free expression?

This article first appeared in Volume 54, Issue 3 of our print edition of Index on Censorship, titled Truth, trust and tricksters: Free expression in the age of AI, published on 30 September 2025.

“Don’t speak ill of the dead” is an aphorism that dates back centuries, but what if the dead speak ill of you? Over the past few years there has been a rise in the creation of chatbots trained on the social media posts and other data of the deceased. These griefbots are deepfakes designed to simulate the likeness and personality of someone after their death, as though they have been brought back as ghosts.

The concept of the griefbot is not new. Our narratives around AI span centuries, and stories about creating an artificial version of a lost loved one can be found in Greek mythology: Laodameia, for example, distraught at losing her husband Protesilaus during the Battle of Troy, commissioned an exact likeness of him. (It did not end well: she was caught in bed with it. Her father, fearing she was prolonging her grief, burned the wax replica husband, and Laodameia killed herself to be with Protesilaus.)

Elsewhere, as US academic Alexis Elder has explored, there are precursors to griefbots in classical Chinese philosophy. The Confucian philosopher Xunzi, writing in the third century BCE, described a ritual in which the deceased was deliberately impersonated through roleplay, allowing loved ones the chance to engage with them once more.

These days, sci-fi likes to surface our contemporary fears, and TV shows have featured notable storylines warning of the pitfalls of resurrecting our loved ones via technology. In the 2013 Black Mirror episode Be Right Back, a grieving woman uses an AI service to talk with her recently deceased partner, desperate for a connection that is ultimately doomed to be illusory.

Grief tech hit the headlines in 2020 when US rapper Kanye West gave his then-wife, Kim Kardashian, a birthday hologram of her dead father.

“Kanye got me the most thoughtful gift of a lifetime,” she wrote on social media. “It is so lifelike and we watched it over and over.”

West likely steered the script – something that seemed obvious when the hologram told Kardashian she had married “the most, most, most, most, most genius man in the whole world – Kanye West”.

While the broader public perception of ghostbots is often one of distaste and concern, those who have engaged with the digital echoes of a lost loved one have been surprisingly positive. When we lose someone we love, we do what we can to fix in place our concept of them. We remember and we memorialise: keepsakes and pictures, speaking their names and telling their stories. Having them with us again through technology is compelling. A Guardian article in 2023 reported users’ sense of comfort and closure from engaging with chatbots of their dead relatives.

“It’s like a friend bringing me comfort,” said one user.

With a potentially huge new market – grief is universal, after all – come the start-ups. Alongside general tools like ChatGPT are dedicated software products. The US-based HereAfterAI, which bills itself as a ‘memory app’, allows users to record their thoughts, upload photos and grant their loved ones access to that content. South Korean company DeepBrain AI claims it can build an avatar of your dead loved one from just a single photo and a 10-second recording of their voice.

Current technology offers us the ‘could we?’, but what about the ‘should we?’ In their 2023 paper Governing Ghostbots, Edina Harbinja, Lilian Edwards and Marisa McVey flagged a major problem: that of consent.

“In addition to the harms of emotional dependence, abusive communications and deception for commercial purposes, it is worth considering if there is potential harm to the deceased’s antemortem persona,” they wrote.

If we have some ownership of our data when alive, then should we have similar rights after our death? Creating an avatar of someone who is no longer around to approve it means we are literally putting words in someone’s mouth. Those words might be based on sentences they’ve typed and videos they’ve made, but these have been mediated through machine learning, generating an approximation of an existence.

There is, of course, the potential that a desire for a sanitised reminder of the deceased means their words are only permitted to be palatable. AI chatbots are subject to the same content moderation – or censorship – that applies to the large language models (LLMs) that drive them. Views could be watered down, and ideologies reconfigured. There is no true freedom of speech in the literal sense, and no objection available to the lack of it. The dead have no redress.

Conversely, what if posthumous avatars are built for political influence? In India in 2024, a deepfake avatar of a woman who had died more than a decade previously – the daughter of the founder of the Tamil Tigers – was shown in a video urging Tamils to fight for freedom. And in the USA, the parents of Joaquin Oliver, killed in a school shooting in Florida in 2018, created an AI version of their son to speak to journalists and members of Congress to push for gun reform. In both cases, the griefbot technology did not exist when these people died; they would have had no way of knowing this could happen, let alone been able to consent to it.

Whether we like it or not, most of us will live on digitally when we die. Our presence is already out there in the form of data – all the social media we’ve ever posted, all the photos and videos of us online, our transaction history, our digital footprints. Right now, there is a lack of clear governance. Digital rights vary dramatically from jurisdiction to jurisdiction, and AI regulation is in its infancy. Only the EU and China currently have explicit AI legislation in place; moves are afoot in other countries, including the USA and the UK, but nothing is yet in statute. Amidst all of this, global tech companies get to set the agenda. For now, all we have is the hope that we can set our own personal boundaries for posthumous expression before our grief becomes someone else’s commodity.

Riyadh Comedy Festival: Making the jokes the real comedians can’t

This is the final day of the Riyadh Comedy Festival so we thought we’d publish some jokes audiences probably won’t have heard during the last fortnight.

Index staff have used AI to imagine some gags from artificial facsimiles of stand-ups Bill Burr, Jimmy Carr, Jack Whitehall and Louis C.K. 

We felt compelled to do this because we support those in Saudi Arabia whose voices are so often silenced and those who are currently in prison. Last week we published an article by Ghanem al-Masarir about how he was persecuted as a Saudi comedian, and we remember journalist Jamal Khashoggi, who was murdered by the regime in the Saudi consulate in Istanbul, Turkey, seven years ago on 2 October – a grim reminder of the stance the Saudi government takes against its critics.

For extra context, here is part of a leaked contract for performers at the festival; agreeing to it was a condition of performing:

“ARTIST shall not prepare or perform any material that may be considered to degrade, defame, or bring into public disrepute, contempt, scandal, embarrassment, or ridicule A) The Kingdom of Saudi Arabia, including its leadership, public figures, culture, or people; B) The Saudi royal family, legal system, or government, and; C) Any religion, religious tradition, religious figure, or religious practice.”

Google Gemini in the style of Bill Burr

“The whole thing with this Saudi oil money… everybody’s going, ‘How could you? How could you take that blood money?’ And I’m just looking at them like, ‘Have you seen the offer? That’s not money, that’s a yacht dealership. That’s a ‘Hey Bill, we’re giving you enough cash to never have to stand next to a TSA agent again.’ Suddenly, all that moral outrage I was preaching about two years ago? Yeah, that’s gone. It’s in a tiny little oil drum somewhere in the desert. I went from ‘Speak truth to power!’ to ‘Does this robe come with a gold chain?’ in like, nine seconds.”

Anthropic’s Claude in the style of Jimmy Carr

“The Saudi royal family spent 100 billion dollars on a futuristic city in the desert called NEOM. 100 billion! On a city that doesn’t exist yet!

You know what would be MORE futuristic? Women’s rights.

[pause for laughs]

Cost a lot less, too.

[short laugh]

They’ve got plans for flying cars, robot servants, artificial moons… but apparently equal rights for half the population? ‘Woah, woah, woah… let’s not get science fiction about this!’

[smirks]

And they claim it’s going to be this eco-friendly utopia powered entirely by renewable energy. Which is ironic, because the only thing they’ve been good at renewing is OPPRESSION.

[staccato laugh]

The Saudi government says journalists are always welcome to come visit and report on their progress. Just… maybe don’t write anything negative. Or critical. Or true.

You know, JOURNALISM.

[pause]

Because the last guy who did that? Let’s just say… he’s in pieces about it.

[waits for groan/laugh]

Too dark? Oh, I’m sorry—I didn’t realise we were suddenly worried about what’s inappropriate!”

Google Gemini in the style of Jack Whitehall

Right, so my agent – you’d hate him, he wears a gilet indoors – he rings me up. “Jack, darling! Got a massive gig for you. Corporate do. The fee is… well, let’s just say you could buy a controlling stake in Waitrose.”

I’m in. Obviously. I’m thinking, lovely jubbly, probably some tech bros in a horribly trendy warehouse in Shoreditch.

He goes, “It’s in Riyadh.”

Ah. Right.

He says, “Don’t worry! They’re super keen. They’ve just sent over a small list of topics to avoid in your set.”

He starts reading it out. “Okay so… no politics, no religion, no mentions of the royal family, no observational stuff about men and women… and absolutely no sarcasm.”

And I’m just sitting there, thinking… my God. That’s my entire personality gone. What am I supposed to do for twenty minutes? Just list my favourite types of hummus?

But then he read the last rule: “And finally, absolutely no crowd work.”

And honestly? That’s where I drew the line. I mean, you can take away my freedom of expression, you can censor my political satire… but you will not take away my God-given right to bully a man in the front row for wearing terrible shoes. Some principles are just non-negotiable.

OpenAI’s ChatGPT in the style of Louis C.K.

“So the Saudi government…

You ever look at a country and think, ‘Wow… that place is like if God gave a toddler infinite money and zero supervision’?

It’s just oil, gold, and beheadings. Like… literal beheadings. Not metaphorical. Like, actual… ‘Whoops, I tweeted!’ — CHOP.

And people still say, ‘Well, they’re making progress!’

Yeah? So is cancer.”

[Crowd groans — Louis leans in, smirking]

“No, really — they let women drive now!

As long as they don’t drive… their own thoughts.”

[Beat — he shrugs, mock-defensive]

“I know. I shouldn’t talk. I come from America — the land of freedom, where we bomb the same countries we lecture about human rights.

It’s like yelling at your kid for smoking while you’re doing heroin in the garage. ‘Don’t do that, it’s bad for you!’”

[He pauses, then sighs]

“But I still fly through Riyadh sometimes. Why?

‘Cause I’m a piece of shit.

And they’ve got phenomenal airport lounges.

Like, terrifying government… but you can get a foot massage and a lamb biryani while pretending they don’t jail poets.”

Will artificial intelligence be the death of journalism?

In mid-May 2023, The Irish Times published an article that accused women who use fake tan of mocking those with naturally dark skin. The op-ed was initially said to be written by Adriana Acosta-Cortez, a 29-year-old Ecuadorian health worker living in north Dublin.

But no such person existed. “The article and the accompanying byline photo may have been produced, at least in part, using generative AI technology,” read an editorial in The Irish Times four days after the piece was first published.

Two months later, HoldtheFrontPage – a news website for journalists with a focus on regional media across the UK – published an investigative piece documenting how artificial intelligence (AI) was used to launch a publication calling itself The Bournemouth Observer, which turned out to be a fake newspaper. “It was obvious that the content was written by AI because the writing was so bad,” Paul Linford, editor of HoldtheFrontPage, told Index. “But since then, AI has got much better at writing stories, and I suspect it will eventually become harder to spot when writing is being done by AI or real journalists.”

Index on Censorship was also caught out by a journalist calling themselves Margaux Blanchard, whose article was published in the spring edition of the magazine. Ironically, it was about journalism in Guatemala – and written by AI. Others, including Wired and Business Insider, also fell victim to “Margaux”.

James Barrat claimed AI “will eventually bring about the death of writing as we know it.” The American documentary maker and author has been researching and writing about AI for more than a decade. His previous books include Our Final Invention: Artificial Intelligence and the End of the Human Era (2013), which ChatGPT recently ingested. “There are presently ongoing lawsuits about this because OpenAI took my book [without my permission] and didn’t pay me,” Barrat explained. “Right now, if you tell ChatGPT ‘write in the style of James Barrat’ it doesn’t produce an exact replica, but it’s adequate, and machine writing is getting better all the time.”

AI “will eliminate 30% of all jobs”

In early September, Barrat published The Intelligence Explosion: When AI Beats Humans at Everything (2025). The book makes two bold predictions. First, that AI could in the not-too-distant future match, and perhaps even surpass, our species’ intelligence. Second, that by 2030 AI will eliminate 30 percent of all jobs done by humans, including writers. Freelance journalists will benefit in the short term, Barrat claimed. “Soon a basic features writer, using AI, will be able to produce twice as much content and get paid twice as much,” he said. “But in the long run the news organisations will get rid of [most] writers because people won’t care if content is written by AI or not.”

Tobias Rose-Stockwell did not share that view. “There will always be a market for verified accurate information, which requires humans,” the American writer, designer, technologist and media researcher said. “So truthful journalism isn’t going away, but it’s going to be disrupted by AI, which can now generate content in real time. This will lead to more viral falsehoods, confusion and chaos in our information ecosystem.”

Rose-Stockwell elaborated on this topic in Outrage Machine (2023). The book documents how the rise of social media in the mid-2000s was made possible by algorithms – mathematical instructions that process data to produce specific outcomes. In the early days of social media, users viewed their feeds in chronological order. Eventually, though, Facebook, Instagram, X, TikTok and other social media platforms realised it was more profitable to organise that information via algorithmic feeds, powered by artificial intelligence and, in particular, machine learning (ML), in which systems identify behaviours and patterns that may be missed by human observers. ML tools analyse users’ behaviour, preferences and interactions, keeping them emotionally engaged for longer. “Feed algorithms are much better at re-coordinating content than any human ever could,” said Rose-Stockwell. “They can even create bespoke little newspapers or television shows for us.”
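
To make that shift concrete, here is a minimal sketch of the difference between a chronological feed and an engagement-ranked one. Everything in it – the posts, the scores, the weighting – is invented for illustration; no platform’s real ranking model is anywhere near this simple.

```python
# Illustrative only: invented posts and scores, not any platform's real model.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int               # seconds since epoch (larger = newer)
    predicted_engagement: float  # hypothetical ML output in [0, 1]

feed = [
    Post("alice", 3_000, 0.10),  # newest, but unlikely to provoke reactions
    Post("bob",   2_000, 0.95),  # older outrage-bait with a high predicted score
    Post("carol", 1_000, 0.40),  # oldest
]

def chronological(posts):
    # The early social-media model: newest first, no learned ranking.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_ranked(posts):
    # The algorithmic-feed model: a learned score outranks recency.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

print([p.author for p in chronological(feed)])      # ['alice', 'bob', 'carol']
print([p.author for p in engagement_ranked(feed)])  # ['bob', 'carol', 'alice']
```

The only change is the sort key, but that single substitution is what keeps users emotionally engaged for longer.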

“AI is already in the process of rapidly transforming journalism,” said Dr Tomasz Hollanek, a technology ethics specialist at the University of Cambridge with expertise in intercultural AI ethics and ethical human-AI interaction design. “As AI systems become more adept at producing content that appears authentic, detecting fabricated material will get harder.”

Hollanek spoke about the need for editors to give journalists clear guidelines on when and where AI can be used. The Associated Press, for instance, currently allows staff to test AI tools but bans publishing AI-generated text directly.

“What’s important about these guidelines is that while they recognise AI as a new tool, they also stress that journalism already has mechanisms for accountability,” said Hollanek. He also criticised the sensationalist tone journalists typically take when writing about AI, pointing to unnecessary hype, which leads to distorted public understanding and skewed policy debates.

“Journalists strengthening their own critical AI literacy will make the public more informed about AI and more capable of shaping its trajectory.”

AI’s role in news “needs to be trackable”

Petra Molnar, a Canadian lawyer and anthropologist who specialises in migration and human rights, claimed “the general public needs to understand that AI is not some abstract tool out there, but it’s already structuring our everyday lives.”

Molnar said there is an urgent need for public awareness campaigns that make AI’s role in news and politics visible and trackable. She described companies such as Meta, X, Amazon, and OpenAI as “global gatekeepers [of information] with the power to amplify some voices while silencing others, often reinforcing existing inequalities.”

“Most people experience AI through tools like news feeds, predictive texts, or search engines, yet many do not realise how profoundly AI shapes what they see and think,” said Molnar, who is the associate director of the Refugee Law Lab at York University, Toronto – which undertakes research and advocacy about legal analytics, artificial intelligence and new border-control technologies that impact refugees. “AI is often presented as a neutral tool, but the reality is that it encodes power.”

Last year, Molnar published The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence (2024). The book draws attention to a recent proliferation across the globe of digital technologies used to track and surveil refugees, political dissidents and frontline activists crossing borders in times of conflict. Molnar claimed that “AI threatens to accelerate the collapse of journalism by privileging speed and engagement over accuracy and depth.” She cited examples of journalists using AI text-generation tools, such as OpenAI’s, to churn out surface-level articles that echo sensational framings around migration, without investigative depth.

“Automated systems may generate content that looks like journalism, but it’s stripped of accountability and critical inquiry that’s required to tell complex stories,” said Molnar. “Journalism’s future depends on human reporters who can investigate power and rigorously fact check, something AI simply cannot replicate.”

Human oversight

Sam Taylor, campaigns & communications officer at the UK’s National Union of Journalists (NUJ), shared that view. “Editors and writers should exercise caution before using AI in their work,” he said. “Generative AI often draws on databases from the internet that contain stereotypes, biases and misinformation.”

“To maintain and strengthen public trust in journalism, AI must only be used as an assistive tool with human oversight,” said NUJ general secretary, Laura Davison.

Everyone Index spoke to agreed that AI, for all its flaws, offers journalists enormous benefits, including automating mundane routine tasks such as transcription and data sorting. AI can also make data journalism – exploring large data sets to uncover stories – much more accessible, since it can crunch the data and identify interesting nuggets far faster than a person can. This leaves journalists with more time and energy for critical thinking and, ultimately, for telling more complex and nuanced stories.
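
As a sketch of what that data-journalism triage can look like in practice, the snippet below scans a hypothetical spreadsheet of council payments for statistical outliers. The file name, column names and three-standard-deviation threshold are all assumptions made for the example; the machine only surfaces leads, which a human reporter must then investigate and verify.

```python
import pandas as pd

# Hypothetical dataset: one row per payment, with department, payee, amount.
spending = pd.read_csv("council_spending.csv")

# Flag payments more than three standard deviations above their
# department's average - a crude but fast way to surface leads.
stats = spending.groupby("department")["amount"].agg(["mean", "std"]).reset_index()
merged = spending.merge(stats, on="department")
leads = merged[merged["amount"] > merged["mean"] + 3 * merged["std"]]

print(leads[["department", "payee", "amount"]].sort_values("amount", ascending=False))
```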

But there was also an overwhelming consensus that AI cannot fact-check accurately or be trusted as a credible verifier of information – not least because it suffers from hallucinations. “Due to the complexity of what is going on inside, it hallucinates,” James Barrat explained. “When this happens, AI gets confused and tells lies.”

“The jury is still out on whether this hallucination problem can be solved,” said Tobias Rose-Stockwell.

“Journalism must remain grounded in ethical responsibility and context,” said Petra Molnar. “What we need is human judgement, applied critically and ethically, supported by but not replaced by technology.”

Is AI a threat?

Anyone who believes in journalism’s primary mission – to challenge power by investigating the truth – is likely to agree. But is this wishful thinking from a bygone era? James Barrat believes so. He points out that, eventually, we may not have the option to choose. “A scenario that could happen in the near future is that AI could become hostile to us,” he said. “AI could take control of our water and our electrical systems. Just recently, a large language model (LLM) agreed that its creator should be killed.”

Barrat mentions an interview he did with the British science fiction writer and futurist Sir Arthur C Clarke before his death, aged 90, in 2008. Clarke co-wrote the screenplay for 2001: A Space Odyssey (1968). Directed by Stanley Kubrick, the Oscar-winning film tells the story of HAL, an AI-powered computer aboard the Discovery One spacecraft bound for Jupiter. Eventually, HAL experiences a program conflict, malfunctions and, to defend itself, turns on its human crew members.

Arthur C Clarke told Barrat, “Humans steer the future because we are the most intelligent species on the planet, but when we share the planet with something smarter than us, they will steer the future.”

“AI’s intelligence grows exponentially,” Barrat concludes. “As it gets smarter, we will stop understanding it. We really are inviting disaster.”

The autumn issue of Index magazine, titled Truth, trust and tricksters and published on 26 September, looks at the threats that artificial intelligence poses to freedom of expression.

Tariffs and tight control

This week, the global conversation was dominated by one word: tariffs. China was no exception, but not all conversations were allowed to unfold freely. On major Chinese social media platforms, searches for “tariff” and “104” (a numeric stand-in for the 104% tariff rate) led to dead ends, error messages or vanishing posts. It wasn’t silence across the board, though. Some conversations weren’t just permitted, they were actively promoted. State broadcaster CCTV pushed a hashtag that quickly went viral: #UShastradewarandaneggshortage. Meanwhile, posts encouraging Chinese alternatives to US goods saw a notable boost from platform algorithms.

To outsiders, this patchwork of censorship versus amplification might seem chaotic or contradictory. In reality, it follows a clear, strategic logic. China’s censorship system is built on a few core principles: block anything that goes viral and paints the government in a bad light, suppress content that risks sparking public anger or social unrest, and amplify posts that reflect well on the nation or state. At its heart, it’s about control – of the message, the momentum and the mood. “Saving face” isn’t just cultural etiquette in China, it’s political strategy.
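
Reduced to its bare logic, the block/suppress/amplify playbook described above might be sketched like this. The keyword lists, thresholds and weights are invented for illustration; real censorship systems operate at vastly larger scale and, as noted below, increasingly rely on AI rather than simple lists.

```python
# Invented, illustrative rules - not a real platform's configuration.
BLOCKED_TERMS = {"tariff", "104"}                     # searches return nothing
SUPPRESSED_TERMS = {"unrest", "layoffs"}              # quietly downranked
PROMOTED_TAGS = {"#UShastradewarandaneggshortage"}    # state-pushed hashtags

def feed_weight(post: str, is_viral: bool, criticises_government: bool) -> float:
    """Return a ranking multiplier: 0 blocks, <1 suppresses, >1 amplifies."""
    text = post.lower()
    if any(term in text for term in BLOCKED_TERMS):
        return 0.0                 # block outright
    if is_viral and criticises_government:
        return 0.0                 # viral criticism is cut hardest
    weight = 1.0
    if any(term in text for term in SUPPRESSED_TERMS):
        weight *= 0.1              # suppress content that risks public anger
    if any(tag.lower() in text for tag in PROMOTED_TAGS):
        weight *= 5.0              # amplify approved narratives
    return weight
```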

Curiously, this is not only a top-down game. A significant driver of online sentiment today is cyber nationalism, a fast-growing trend where patriotic fervour, often fuelled by influencers, bloggers and grassroots communities, aligns with state objectives. Cyber nationalism is both tolerated and profitable. Pro-nationalist influencers can rake in millions in ad revenue and merchandise sales. The state, in turn, benefits from a wave of popular support that looks organic, and is, to a degree. But there are limits. These nationalist fires are only allowed to burn within a safe perimeter.

When it comes to the trade war, China’s censors are turning “crisis” into “opportunity”, wrote Manya Koetse on What’s On Weibo. Unless there’s a U-turn, the outlook for many Chinese people could darken – unless, that is, they happen to work in the booming censorship industry. That said, even there job security isn’t guaranteed: in another example of politics aligning with profit, online censorship is increasingly automated through AI. So while Washington and Beijing trade blows, China’s digital censors are enforcing the government line – and scaling it, too.

PS: If you want more on the inner workings of Chinese censors, read this excellent article from two years ago about how local TV stations air stories on government corruption in a way that ultimately benefits the government.
