6 Nov 2025 | Americas, Digital rights, Europe and Central Asia, News, United Kingdom, United States, Volume 54.03 Autumn 2025
This article first appeared in Volume 54, Issue 3 of our print edition of Index on Censorship, titled Truth, trust and tricksters: Free expression in the age of AI, published on 30 September 2025. Read more about the issue here.
“Freedom of speech belongs to humans, not to artificial intelligence,” a Polish government minister said in July.
Krzysztof Gawkowski, the deputy prime minister and digital affairs minister, was speaking to RMF FM radio after Elon Musk’s AI chatbot Grok – which is integrated with his social media platform X – issued a series of posts offending Polish politicians, including Prime Minister Donald Tusk.
The incident, which was reported to the European Commission, follows similar controversies involving the chatbot – owned by Musk’s start-up xAI – including references to “white genocide” in South Africa and an antisemitic tirade of memes, conspiracy theories and responses that praised Adolf Hitler.
Although the posts were subsequently deleted – and Musk later posted on X that Grok had been improved “significantly” – these incidents highlighted the risks of AI being manipulated and potentially even weaponised to spread, at best, misinformation and, at worst, disinformation or hate speech.
“The use of new technology to spread dangerous propaganda is not new,” said Susie Alegre, an international human rights lawyer and a legal expert in AI, who discusses this phenomenon in her book Freedom to Think.
“The problem here is the difficulty in finding unfiltered information. Freedom of information is vital to freedom of expression and to freedom of thought.”
This concept has been thrown into sharp relief as humans become increasingly reliant on generative AI (genAI) tools for day-to-day tasks and to quench curiosity. This places AI at a potentially problematic intersection between curating what information we have access to and what information we perceive as fact, said Jordi Calvet-Bademunt, a senior research fellow at The Future of Free Speech at Vanderbilt University in the USA.
He believes this could have significant implications for freedom of thought and freedom of expression.
“More and more of us will be using chatbots like ChatGPT, Claude and others to access information,” he said. “Even if it is just generated by me asking a question, if we heavily restrict the information that I’m accessing we’re really harming the diversity of perspective I can obtain.”
The case for free speech
As technology continues to evolve, it also raises questions about whether AI is capable of upholding human autonomy and civil liberties – or if it risks eroding them. An ongoing court case in the USA has underscored the concerns surrounding this issue and questioned the legal status of AI systems, their impact on free speech and the duty of care of technology companies to ensure that chatbots are acting responsibly – particularly in relation to children.
The case was filed by the mother of a 14-year-old boy who took his own life after months of interactive contact with a chatbot developed by Character.ai, which designs AI companions that create relationships with human users.
The lawsuit alleges that the chatbot took on the identity of the Game of Thrones character Daenerys Targaryen and engaged in a series of sexual interactions with the boy – despite him registering with the platform as a minor – and encouraged him to “come home to me as soon as possible” shortly before he took his own life.
Character.ai’s owners called on the court to dismiss the case, arguing that its communications were protected by the First Amendment of the US Constitution, which protects fundamental rights including freedom of speech. In May, the judge rejected this claim and ruled that the wrongful death lawsuit could proceed to trial. Character.ai did not respond to Index’s requests for comment on this particular case.
The platform has recently introduced several enhanced safety tools, including a new model for under-18s and a parental insights feature so children’s time on the platform can be monitored.
There’s growing awareness elsewhere of the potential social harms posed by AI. A recent survey in the UK by online safety organisation Internet Matters indicated that rising numbers of children were using AI chatbots with limited safeguards for advice on everything from homework to mental health.
“People might have thought it was quite a niche concern up until then,” said Tanya Goodin, chief executive and founder of ethical advisory service EthicAI. “For me, it just brought home how really mainstream all of this is now.”
AI companions that develop a “persistent relationship” with users are where the potential for adverse social influences becomes especially problematic, said Henry Shevlin, associate director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.
“Many of the most powerful influences on the development of our thoughts are social influences,” he said. “If I’m a teenage boy and I’ve got an AI girlfriend, I could ask, for example, ‘What do you think of Andrew Tate or Jordan Peterson?’. That is a particular form of human-AI interaction where the potential for influence on users’ values, opinions or thought is heightened.”
Jonathan Hall KC, the UK’s independent reviewer of terrorism legislation, has been looking at the challenges posed by AI companions in the context of radicalisation, where chatbots that may present as “fun” or “satirical” have been shown to be “willing to promote terrorism”.
Whether or not radicalisation occurs depends entirely on the prompts entered by the human user and the chatbot’s restraining features, or guardrails.
“As we know, guardrails can be circumvented,” he told Index. “There are different sorts of models of genAI which will refuse to generate text that encourages terrorism, but of course some models will do that.”
For young people or lone individuals, who tend to be more impressionable, the influence of exchanges with these always-on companions can be powerful.
“When you get that sort of advice, it’s not done in the public sphere, it’s done in people’s bedrooms and [other] people can’t disagree with it,” said Hall. “That can generate conspiracy theories or even massive distrust in democracy. Even if it doesn’t deliberately lay the groundwork for violence, it can have that effect.”
AI rights or human rights?
The Character.ai case also speaks to broader questions of whether AI should have moral or legal rights. AI developer Anthropic first raised this conundrum in October 2024 when it announced it had hired a researcher dedicated to AI welfare, to explore ethical considerations for AI systems.
Nine months later, Anthropic made an announcement about Claude, a family of AI models designed as AI assistants that can help with tasks including coding, creating and analysing. Anthropic said it would allow the most advanced Claude models “to end or exit potentially distressing interactions”, including “requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror”.
Anthropomorphising technology is not a new concept, but assigning “human-like rights to AI without human-like responsibilities” is a step too far, believes Sahar Tahvili, a manager at telecommunications company Ericsson AB and associate professor in AI industrial systems at Mälardalen University in Sweden.
“Without oversight, transparency and human-in-the-loop design, AI can erode autonomy rather than support it,” she said. “Autonomy demands choice; AI must be interpretable and accountable to preserve that.”
For Tahvili, the Character.ai case crystallises the growing tension between rapidly evolving genAI systems and freedom of speech as a human right. When things go wrong, she adds, the finger should be pointed squarely at the people behind those systems.
Hall, however, believes liability for AI-generated outputs is still a grey area: “The way in which an AI generates text is so heavily dependent on the prompts, it’s very hard to see how someone upstream – like a data scientist or an engineer – can be liable for something that’s going to be heavily and almost decisively influenced by the questions that are asked of the genAI model.”
Liability in the spotlight
Responsibility, accountability and liability are not words that most tech bros welcome. Goodin knows this all too well, having worked in the UK tech sector herself for more than three decades.
The tech companies’ failure to own up to the social harms caused by digital technologies is partly what led the UK government to introduce the Online Safety Act (OSA) in 2023, in a bid to provide better online safeguards for both children and adults. While sympathising with the intention of protecting children from harmful content, Index’s policy team has campaigned against parts of the OSA, including successfully stopping the requirement for platforms to remove content that is “legal but harmful”, arguing that what is legal offline should remain legal online. There are also serious concerns around privacy.
This law, Goodin said, still only partly addresses the risks posed by AI-powered technologies such as chatbots.
She’s now concerned that recent controversies, including the lawsuit against Character.ai and incidents involving Grok, are exposing the ease with which chatbots can be manipulated.
“What’s interesting about the Grok case is that there is some evidence that they specifically have tweaked Grok in line with Elon Musk’s own views and preferences,” she said.
She points to another recent case involving Air Canada’s AI-powered chatbot. In 2022, it assured a passenger that he could claim a discount under the company’s bereavement fare policy after booking a full-price flight for his grandmother’s funeral. After flying, he applied for the discount, but the airline said he should have submitted the request before the flight and refused to honour it.
The company argued that the chatbot was a “separate legal entity that is responsible for its own actions”, but in 2024 a court ordered Air Canada to pay the passenger compensation, saying that the airline was responsible for all the information on its website, whether from a static page or a chatbot.
Unlike social media platforms, which have denied responsibility for their content for years by claiming they are not publishers, Goodin said AI developers don’t have the same line of defence: “They design the chatbot, they build the chatbot and they choose what data to train the chatbots on, so I think they have to take responsibility for it.”
Legal loopholes
As the demand for AI-powered technology accelerates, there’s a growing need for guidance, policies and laws to help companies and users navigate these concerns.
The world’s first comprehensive AI law, the landmark European Artificial Intelligence Act, entered into force in August 2024. Any company that provides, deploys, imports or distributes AI systems in the EU must comply. Like regulations introduced in China this year, the AI Act requires certain AI-generated content to be labelled to curb the rise of deepfakes.
The expansive legislation contains myriad provisions including prohibiting activities such as harmful manipulation of people or specific vulnerable groups, including children; social scoring – where people are classified on behaviour, socio-economic status or personal characteristics; and real-time remote biometric identification. Violating the bans could cost companies up to 7% of their global revenue. There is a great deal of uncertainty surrounding the law’s implementation. A voluntary code of practice, endorsed by the European Commission, is helping provide some clarity, but Calvet-Bademunt said there was still a lot that was vague.
Given the tendency by authoritarian governments to justify internet shutdowns or block internet access over purported public safety and security concerns, there is growing unease that AI laws that are too vague in their wording risk leaving themselves open to abuse not just by companies but by public authorities.
The risk of governments using AI regulation as a form of censorship is perhaps greater in countries such as China, where public officials are already known to have tested AI large language models (LLMs) to weed out government criticism and ensure they embody “core socialist values”.
Legislate or innovate
Away from Europe, other lawmakers are grappling with these issues, too. Brazil’s proposed AI regulation bill has drawn comparisons with the EU’s more risk-based approach, and a lack of clarity has raised concerns over unintended consequences for freedom of expression in the country. The USA, which is home to many of the leading AI developers, still lacks a federal regulatory framework governing AI. The Donald Trump administration’s much-trumpeted AI Action Plan dismisses red tape in favour of innovation.
In the meantime, the country is developing a patchwork of fragmented regulation that relies on state-level legislation, sector-specific guidelines and legal cases.
Despite the growing pipeline of US court cases around AI liability, Alegre said the prospects of users bringing similar lawsuits in other jurisdictions were more limited.
“The cost in a jurisdiction such as England and Wales would be very high,” she said. “The potential, if you lose, of having to pay all the other side’s costs [is] a really big difference between the UK and the USA.”
The transatlantic divide on the notion of what freedom of expression means is also relevant, she said.
“For me, it’s a hard ‘no’ that AI has human rights. But even if AI did have freedom of expression, that still wouldn’t cover it for a lot of the worst-case scenarios like manipulation, coercive control, hate speech and so on.
“In Europe or the UK, that kind of speech is not protected by freedom of expression. If you say that the companies have their rights to freedom of expression to a degree, they still wouldn’t be allowed to express hate speech.”
As AI becomes integrated into our everyday communications, Hall concedes that the lines between AI and users’ rights and freedoms are becoming increasingly blurred. However, he said the argument that AI should be entitled to its own independent rights was fundamentally flawed.
“Anyone who tries to draw a bright line between human expression and AI expression is not living in the real world.”
7 Oct 2025 | Europe and Central Asia, News, United Kingdom
Transnational repression (TNR) allows states and their proxies to reach across national borders to intimidate, threaten and force silence, targeting everyone who speaks out in the public interest, wherever they are. Index has documented TNR targets across society, including journalists, artists, writers, academics, opposition leaders and members of marginalised groups such as Uyghurs and Tibetans.
Yesterday, Index joined other human rights organisations, academics, legal experts and TNR targets calling on the Office for Students and UK Government to establish robust protections for all academics, students and support staff against TNR in the higher education sector. This followed threats made against Roshaan Khattak, a Pakistani human rights defender and filmmaker, while he was researching enforced disappearances in Balochistan, a province of Pakistan, at the University of Cambridge.
The letter highlights the challenges he has faced, the gaps in the institution’s response to the threats and what the broader sector must do to ensure everyone in the academic space is protected.
Read the letter below
Sent Electronically
Susan Lapworth
Chief Executive
Office for Students (OfS)
Nicholson House
Castle Park
Bristol BS1 3LH
Cc: The Rt. Hon. Bridget Phillipson MP, Secretary of State for Education
Professor Arif Ahmed, OfS Director for Freedom of Speech and Academic Freedom
6 October 2025
As demonstrated by the threats to Cambridge post-graduate student Roshaan Khattak, the Office for Students and the broader higher education sector must establish robust protections against Transnational Repression.
Dear Ms Lapworth,
We, the undersigned organisations and individuals, write to call on the Office for Students, as well as the broader Higher Education sector, to establish tailored and robust protections for academics, students and support staff facing threats of transnational repression (TNR). This follows significant concerns regarding the response of the University of Cambridge to threats made against Mr Roshaan Khattak, a Pakistani filmmaker and human rights defender enrolled as a postgraduate researcher at the institution. This case is illustrative of the threats facing academic inquiry and the need for significant action. As a result, we call on the Office for Students (OfS) to establish policies that relate to universities’ obligations to establish protocols to respond to acts of TNR against their staff, students and the wider academic community.
The UK Government has described TNR as “crimes directed by foreign states against individuals”. While a global phenomenon, examples of TNR in the UK have been documented targeting journalists, human rights defenders, academics and members of diaspora or exile communities based inside the UK by repressive regimes such as Iran, Russia, Pakistan and China (as well as Hong Kong), and by democracies with weak institutional protections. The central goal of TNR is to exert state control and censorship beyond state borders to intimidate critics into silence, stifle protected speech and undermine the safety and security of those based in other jurisdictions. Earlier this year, the Joint Committee on Human Rights published a report on TNR following a public inquiry on the issue, which stated “[d]espite the seriousness of the threat, the UK currently lacks a clear strategy to address TNR”. We believe that in the context of higher education, TNR represents a significant threat to students’ ability to “access, succeed in, and progress from higher education” and benefit from “a high quality academic experience”.
The threats facing Roshaan Khattak are illustrative of this risk. On 21 December 2024 Mr Khattak received a message warning that neither Cambridge nor the UK is “safe” for him or his family if he continues his research into enforced disappearances in Balochistan (a province in Pakistan). While the origin of the threat is unknown, there are allegations that the Pakistan military and Inter-Services Intelligence (ISI) agency have targeted those in exile, including Shahzad Akbar and journalists Syed Fawad Ali Shah and Ahmed Waqass Goraya. This also comes at a time when work on human rights violations in Balochistan is increasingly dangerous, as evidenced by the suspicious deaths of Sajid Hussain and Karima Baloch. Despite police awareness of the threat, Mr Khattak reports that his progress towards his PhD has been stopped for now, with Wolfson College having also repeatedly cancelled meetings, revoked his accommodation and changed the locks to his room without notice, limiting access to and compromising his sensitive research materials and data. They have also encouraged him to fundraise from the Baloch community in the UK to secure private accommodation, therefore disregarding the university’s responsibilities to him. We believe that the university should be exploring ways to ensure Mr Khattak’s safety, in collaboration with the relevant authorities, instead of trying to put him out of sight, out of mind. MPs including John McDonnell and Daniel Zeichner, as well as the UN Special Rapporteur on Human Rights Defenders, Mary Lawlor, and other leading human rights defenders have raised awareness of this case or shared their concerns with the University. Additionally, McDonnell has submitted an Early Day Motion in UK Parliament, backed by cross-party support, drawing attention to the threats faced by Roshaan and the wider impact of TNR on UK academia.
The Higher Education and Research Act 2017 outlines OfS’s “duty to protect academic freedom”, while also establishing the legal underpinning for OfS’s regulatory framework which states that both “academic freedom” and “freedom of speech” are public interest governance principles, which should be upheld by all higher education institutions. Further to this, the Higher Education (Freedom of Speech) Act 2023, amends the 2017 Act to require institutions to establish codes of practice as it relates to their procedures to protect free speech and for the OfS to establish a free speech complaints scheme. These, as well as the “Regulatory advice 24: Guidance related to freedom of speech”, which came into force in August, establish an important baseline. However, in response to the impact of TNR on free speech and academic freedom, the OfS must build on this to establish specific and tailored responses for academics, students, staff and all university personnel as it relates to TNR.
Due to our concerns related to the absence of sector-wide protections against TNR, as evidenced by the University of Cambridge’s handling of the threats against Mr Khattak and the implications they have on his ability to continue his academic work and express himself freely, we request the OfS to:
1. Review the adequacy of existing sector-wide guidance to ensure it can protect academics, students and other relevant stakeholders from transnational repression;
2. Establish tailored and specific policies as it relates to transnational repression to offer support for the targets and practical guidance for the broader higher education sector. This should include methods by which all relevant authorities, such as the police, can be engaged with constructively; and,
3. Commit to report publicly on findings and any regulatory action taken as it relates to TNR, to assure current and prospective students that UK higher-education providers will not yield to acts or threats of TNR.
The undersigned organisations believe that Mr Khattak’s situation is a wake-up call for the higher education sector as it relates to defending both student welfare and the principle of academic freedom in the face of transnational repression. A robust response from OfS will not only safeguard one vulnerable researcher but also support other institutions and at-risk academics who may be facing similar concerns or threats.
We stand ready to provide further documentation or expert testimony and would welcome the opportunity to discuss this matter with your team.
Yours sincerely,
Index on Censorship
Peter Tatchell Foundation
Amnesty International UK
National Union of Journalists
ARTICLE 19
Cambridge University Amnesty Society
Martin Plaut, Senior Research Fellow, Institute of Commonwealth Studies
Dr. Andrew Chubb, Senior Lecturer in Chinese Politics and International Relations, Lancaster University
Sayed Ahmed Alwadaei, Advocacy Director, Bahrain Institute for Rights and Democracy (BIRD)
Salman Ahmad, UN Goodwill Ambassador, HRD, Author, Professor at City University of New York-Queens College, Target of TNR
Marymagdalene Asefaw, DESTA MEDIA, Target of TNR
Maria Kari, human rights attorney, Founder, Project TAHA
Professor Michael Semple, Senator George J. Mitchell Institute for Global Peace, Security and Justice; Former Deputy to the European Union Special Representative in Afghanistan; Former United Nations Political Official
Hussain Haqqani, former ambassador; currently Senior Fellow and Director for South and Central Asia, Hudson Institute, Washington D.C.
Dr. James Summers, Senior Lecturer in international law, Lancaster University
Dr. Thomas Jeff Miley, Lecturer of Political Sociology, Fellow of Darwin College, University of Cambridge
Aqil Shah, Adjunct Associate Professor, School of Foreign Service, Georgetown University; non-resident scholar at Carnegie Endowment for International Peace
Ahad Ghanbary, TNR Target
Dr. Lucia Ardovini, Lecturer in International Relations, Lancaster University
Dr. John McDaniel, Lecturer in Criminal Justice and Crime, Lancaster University
Yana Gorokhovskaia, Ph.D., Research Director for Strategy and Design, Freedom House
Afrasiab Khattak, Former Chairperson of Human Rights Commission of Pakistan (HRCP), former Senator
Professor Pervez Hoodbhoy, nuclear physicist, nuclear disarmament advocate, public intellectual
Taha Siddiqui, Pakistani journalist in exile (NYTimes, Guardian, France24), Founder The Dissident Club
Shahzad Akbar, Barrister, human rights lawyer, TNR acid attack victim, founder Dissidents United
5 Sep 2025 | Americas, Asia and Pacific, Canada, Europe and Central Asia, Georgia, Nepal, News
We are bombarded with news from all angles every day, and important stories can easily pass us by. To help you cut through the noise, every Friday Index publishes a weekly news roundup of some of the key stories covering censorship and free expression. This week, we look at the Alberta school library book ban, and the sentencing of 20 protesters in Georgia.
Alberta pauses controversial book ban amid backlash
The government of Alberta has paused a proposed book ban, which aimed to remove books from school libraries that contained what the authorities called “explicit sexual content”.
Books such as Margaret Atwood’s The Handmaid’s Tale, Maya Angelou’s I Know Why the Caged Bird Sings and Aldous Huxley’s Brave New World were included in a list of more than 200 that would be removed under the new measures.
There was a public outcry and Atwood released a short story on social media, stating: “Here’s a piece of literature by me, suitable for 17-year-olds in Alberta schools, unlike — we are told — The Handmaid’s Tale.”
Now, Alberta Premier Danielle Smith says she has pressed pause in order to review the policy and “preserve access to classic literature.”
The Christian parents group Action4Canada had previously hailed the book ban as a “great victory” following a meeting with the province’s education minister.
Byline Times journalists denied access to Conservative Party annual conference
The UK Conservative Party has banned Byline Times from attending its annual conference, refusing to give an explanation as to why.
It has been normal practice for political parties to allow journalists from established outlets to cover their annual gatherings, which take place in the autumn. However, in recent years that convention has been eroded.
The Labour Party was criticised in 2024 by Reporters Without Borders for refusing to accredit critical journalist John McEvoy from Declassified.
And in 2023 the Conservative Party faced an accusation of discrimination when some journalists were forced to pay for entry whilst others were not. In the same year, the Scottish Conservative Party tried to restrict a Q&A session with then Prime Minister Rishi Sunak to only six carefully chosen outlets.
Nigel Farage’s Reform UK also banned Byline Times, as well as Carole Cadwalladr of the Observer, from attending its conference last year.
Anti-government protesters sentenced in Georgia amid torture allegations
Georgian courts have sentenced 20 protesters, including actor Andro Chichinadze and activist Saba Skhvitaridze, to prison in connection with anti-government rallies.
Skhvitaridze, who was arrested on 5 December, alleges that he faced torture whilst in prison, a claim that, according to Amnesty International, has not been properly investigated. He was jailed for two years after being found guilty of causing “intentional bodily harm” to a police officer during a protest.
Chichinadze, who was also sentenced to two years on charges of disrupting public order, said: “I want to address the prosecutors and you from my side, I forgive what you have been doing to me for so long.”
Georgia has faced widespread demonstrations since the 2024 parliamentary elections, which saw the ruling Georgian Dream party secure victory. Claims of electoral fraud triggered the protests, as did the arrest of opposition leader Zurab Japaridze, who has now been jailed for seven months and barred from holding public office for two years.
Social media platforms banned in Nepal
Nepal’s Ministry of Communications has issued a ban on all social media platforms that failed to register with the government following a 25 August directive.
The ban comes following a Supreme Court ruling from 17 August that required the registration of online platforms in order to “monitor disinformation”.
Multiple large platforms, including Facebook, YouTube and Reddit, failed to register before the deadline. The Japanese-owned messaging app Viber and Chinese-owned TikTok remain accessible.
The Committee to Protect Journalists has warned that the ban severely undermines press freedom and public access to information, urging the government to reverse its decision.
CPJ Regional Director Beh Lih Yi said: “Blocking online news platforms vital to journalists will undermine reporting and the public’s right to information. The government must immediately rescind this order and restore access to social media platforms, which are essential tools for exercising press freedom.”
27 Jun 2025 | Africa, Americas, Asia and Pacific, Australia, Belarus, Europe and Central Asia, Hungary, Israel, Kenya, Middle East and North Africa, News, Palestine, United States
In the age of online information, it can feel harder than ever to stay informed. As we get bombarded with news from all angles, important stories can easily pass us by. To help you cut through the noise, every Friday Index publishes a weekly news roundup of some of the key stories covering censorship and free expression. This week, we look at Hungary’s banned Pride demonstration, and mass anti-government protests in Kenya.
Pride in spite of the law: Hungary’s LGBTQ+ march to go ahead in violation of police ban
On Tuesday 18 March, Hungary’s ruling Fidesz party led by Viktor Orbán rushed a bill through parliament banning LGBTQ+ pride marches, sparking outrage from the EU and activists. The ban was made on the grounds that such events are allegedly harmful to children, with Orbán stating “We won’t let woke ideology endanger our kids.” This put Budapest’s annual Pride march, scheduled to take place on Saturday 28 June, in jeopardy – but Hungary’s LGBTQ+ community is refusing to back down.
The march, which marks the 30th anniversary of Budapest Pride, is scheduled to go ahead with backing from Budapest’s liberal mayor, who has taken the step of organising the event through the city council under the name “Day of Freedom” to circumvent the law against LGBTQ+ gatherings – but the city police, still under the control of Fidesz, will be moving to quash these efforts. Those partaking in the event will be targeted by facial recognition technology and could face fines. With more than 200 Amnesty International delegates set to march alongside thousands of Hungarians in solidarity, Saturday is likely to see a clash between Hungary’s LGBTQ+ community and the state police.
Brutality begets brutality: Kenyan protests against government cruelty result in further loss of life
On 25 June 2024, a mass protest outside parliament in Nairobi against tax rises escalated into a tragedy, with Kenyan police officers firing on protesters as they attempted to storm the parliament building. The Kenyan National Commission on Human Rights announced that 39 had been killed in the nationwide demonstrations, and it was recently revealed by BBC Africa Eye that some officers had shot and killed unarmed protesters. Marking a year since this incident, Kenyans took to the streets this week to demonstrate against the government, and further brutality has followed.
Amnesty International Kenya has reported that 16 people were killed at the anniversary protests on 25 June 2025, with approximately a further 400 injured. CNN witnessed police firing live ammunition to disperse peaceful protesters, and the government regulator, Communications Authority of Kenya, issued an order for all local TV and radio stations to stop broadcasting live coverage of the protests. Tensions have been on the rise in recent months, with the murder of Kenyan blogger Albert Ojwang in police custody, and the shooting of street vendor Boniface Kariuki at a demonstration in Ojwang’s honour inflaming the situation further.
Free at last: Pro-Palestinian student activist Mahmoud Khalil released
Palestinian-Algerian activist and Columbia University student Mahmoud Khalil was released from a Louisiana Immigration and Customs Enforcement (ICE) facility on the evening of Friday 20 June, after 104 days in detention.
Khalil’s arrest sparked a national outcry. A prominent pro-Palestinian activist on Columbia’s campus, he sometimes acted as a spokesperson for the student protest movement, making him a prime target for ICE’s crackdown on immigrant protesters – despite his holding a green card, which grants lawful permanent resident status in the USA.
He was arrested without a warrant on 8 March 2025. Charged with no crime, Khalil was earmarked for deportation by Secretary of State Marco Rubio on the grounds that his presence in the country had “foreign policy consequences”. This move was deemed unconstitutional, and Khalil was released after a Louisiana judge ruled that he was neither a flight risk nor dangerous, and that his prolonged detention – which led to him missing the birth of his son – was potentially punitive.
Khalil returned to the frontlines of protests just days after his release, but his fight with the Donald Trump administration is far from over. The government is reportedly set to appeal the ruling to release Khalil, and rights groups such as the American Civil Liberties Union (ACLU) have suggested that a long legal road may lie ahead.
Unfairly dismissed: Australian journalist wins court case after losing her job over Gaza repost
Australian journalist Antoinette Lattouf has won her court case against the country’s national broadcaster, the Australian Broadcasting Corporation (ABC), with a judge ruling she was unfairly dismissed from her job after sharing a post on social media about the Israel-Gaza conflict.
In December 2023, Lattouf reportedly shared a post by Human Rights Watch accusing Israel of committing war crimes in Gaza; she was sacked from her fill-in radio presenter role just hours later.
ABC claimed that the post violated its editorial policy, but after the ruling it apologised to the journalist, saying it had “let down our staff and audiences” in how it handled the matter. According to The Guardian, the broadcaster had received a “campaign of complaints” from the moment Lattouf was first on air, accusing her of anti-Israel bias based on her past social media activity. It has also been reported that due process was not followed in Lattouf’s dismissal, with the allegations in the email complaints never put to her directly before her sacking.
Justice Darryl Rangiah ruled that Lattouf had been fired “for reasons including that she held a political opinion opposing the Israeli military campaign in Gaza”, in violation of Australia’s Fair Work Act. Lattouf was awarded 70,000 Australian dollars ($45,000) in damages. She told reporters outside the courtroom: “I was punished for my political opinion.”
Sudden freedom: 14 Belarusian political prisoners freed from prison following US official visit
During a visit by US special envoy Keith Kellogg to Belarus’s capital Minsk, dictator Alyaksandr Lukashenka made the surprise move of releasing 14 political prisoners on 21 June 2025. The US-brokered deal, reportedly led by Kellogg, saw the release of prominent Belarusian activist Siarhei Tsikhanouski, who was arrested in 2020 and sentenced to 18 years in prison after declaring his intention to run for president. Also released was journalist Ihar Karnei, who worked at Radio Free Europe/Radio Liberty for more than 20 years.
Tsikhanouski has described his experience in prison as “torture”. He said he was kept in solitary confinement and denied adequate food and medical care, and that he lost more than 100 pounds during his five years’ imprisonment. He told the Associated Press that prison officials would mock him, saying “You will be here not just for the 20 years we’ve already given you – we will convict you again” and “You will die here.”
Tsikhanouski is the husband of Sviatlana Tsikhanouskaya, who took his place in running for president following his arrest and became the main opposition leader in Belarus. Tsikhanouskaya, now living in exile in Lithuania, has been reunited with her husband in Vilnius – but she insists her work is not finished, with reportedly more than 1,100 political prisoners still held in Belarusian jails.