A year ago, I asked whether academic freedom could survive Donald Trump’s plans for thought control. We now have the answer. Trump’s most effective weapon to this end has been the financial mechanisms linking state and academia. In the first week of his presidency, Trump ordered a “temporary pause” on billions of dollars in funding for education and scientific research already approved by Congress. This was followed by a wave of 30 Executive Orders and legislation relating to higher education in the first 75 days of the new administration. Collectively, these have had a devastating impact on independent research, threatening to engineer compliant instruction in America’s universities.
The trend toward limiting academic freedom is not confined to the United States. In the United Kingdom, research-intensive universities have begun to prepare for the worst. As reported in The Times of London this week, Cambridge University has been “cosying up” to Nigel Farage’s Reform Party, amid fears that the party will copy Trump’s approach to academic freedom if it forms the next UK government. During the electoral campaign last year, Reform promised to “cut funding to universities that undermine free speech [sic]”; with this threat in mind, Cambridge’s vice-chancellor Deborah Prentice warned the university’s governing council that “what the US example reminds you is you have to worry about what’s coming next.”
A mapping of the Trump administration’s cull by the Center for American Progress documented the targeted termination of more than 4,000 grants at over 600 universities and colleges across the country, alongside funding cuts of between $3.3 billion and $3.7 billion. In the resulting fallout, clinical trials for cancer, Covid and minority health have been stopped, satellite missions halted and climate centres closed.
Funding freezes have been justified on the pretext of allegations of antisemitism in America’s universities, alongside claims that diversity, equity and inclusion (DEI) practices constitute “discrimination” against some students. According to a memo dispatched by the Executive Office of the President in January 2025, “[t]he use of Federal resources to advance Marxist equity, transgenderism, and green new deal social engineering policies is a waste of taxpayer dollars that does not improve the day-to-day lives of those we serve.”
This dual framing produces contradictory and uneven demands: universities are under pressure to suppress some forms of free expression while tolerating others. In March, Trump warned institutions that a failure to crack down on “illegal protests” could jeopardise their eligibility for federal funding. DEI was cast as evidence of thought policing; professors have lost funding for researching “woke” subjects, and even been fired for allegedly teaching “gender ideology”. All this reinforces a climate in which activities or speech seen as “liberal” are punished, while opinions aligned with the administration are protected. This perception was reinforced by the firing of up to 40 educators for comments made on social media following the assassination of Charlie Kirk in September, leaving many professors unsure what they can say online.
The first casualty was Columbia University, which had $400m in grants pulled over campus protests; the university settled, as did Brown. The Trump administration also dramatically ramped up enforcement of rules requiring universities to report large foreign gifts or contracts from China and from states in the Middle East. Several top institutions, including Berkeley and Harvard, have come under investigation over their disclosures.
An article in Inside Higher Ed provides a vivid account from a PhD student of the impact of this squeeze on higher education in the United States. “Our institution is just scrambling to figure out what DEI is and what programs will be affected,” the doctoral researcher said. “I study the development of disease, which tends to affect populations of certain ethnic and cultural backgrounds more than others. Is that DEI?”
According to a poll of 1,600 scientists conducted by Nature, three-quarters of respondents were considering leaving the United States following the Trump upheaval, with Europe and Canada cited as the most favoured destinations for relocation. This is hardly surprising, given the uncertainty of the moment. But is the grass truly greener on the other side? The events of the last year have sent tremors internationally, largely because of the influential status and respect accorded to US academia. As Rob Quinn, executive director of US body Scholars At Risk, told The Guardian, “We are witnessing an unprecedented situation – really as far as I can tell in history – where a global leader of education and research is voluntarily dismantling that which gave it an advantage.”
As noted above, there are fears of a similar attack on higher education in the United Kingdom, where universities are already grappling with contradictory interpretations of the right to free speech. The Office for Students has threatened to sanction universities if campus protests over Palestine and the war in Gaza are deemed to constitute “harassment and discrimination” – while in parallel rolling out comparable sanctions against universities for actions taken to prevent transphobic abuse and harassment. Countries around the world are watching developments with apprehension, and Scholars At Risk has warned that the Trump administration’s assault on universities is turning the US into a “model for how to dismantle” academic freedom.
Jon Fansmith, senior vice president for government relations and national engagement at the American Council on Education, has argued that the Trump administration’s actions are not in accordance with the law. “They don’t have any statutory or regulatory authority to suspend research on the basis of accusations.” Fansmith sees the freezes as a way “to force a negotiation so they can claim victory when they lack any sort of authority or any sort of evidence that would allow them to do it in the appropriate way.”
In October, dealmaker-in-chief Trump proposed a “compact” to nine universities, offering them preferential funding arrangements if they acceded to a list of demands. These, PEN America reported, included a prohibition on employees “making statements on social or political matters on behalf of the university”, and screening international students for “anti-American values.” Other requirements included enforcing a binary definition of gender, freezing tuition rates charged to American students for five years, and removing diversity as a factor in admissions decisions. Seven of the nine targeted institutions declined the offer and no major research universities agreed to sign; it seems clear that entering into such a compact would, in effect, end academic independence and institutional autonomy.
The Trump administration’s tactic of extracting concessions by manufacturing crises that it then offers to resolve has nonetheless had some wins, with some universities “obeying in advance”, as Timothy Snyder might say. Under significant pressure – by way of a $790 million funding freeze and a Title VI civil rights investigation – Northwestern University recently reached a $75 million settlement (albeit without conceding liability) with the Trump administration. As part of the settlement agreement, Northwestern agreed to investigate claims of antisemitism, to make statements on transgender issues that reflect Trump’s Executive Order, and to ensure that admissions procedures no longer take into account “race, color, or national origin”.
Beyond funding, accreditation has become another pressure point, with professional bodies being pushed by authorities to eliminate requirements relating to diversity or social justice. The American Bar Association, for example, is reviewing its accreditation standards and has suspended enforcement of its DEI standard for law schools – an indication of the federal government’s success in pushing accreditation bodies into shifting existing norms.
All this said, in the face of potentially dire outcomes, a number of states, universities and grantees have challenged the Trump imperative in court, offering the academic community examples of principled resistance and coalition building. Even as UCLA continued to negotiate a $1 billion fine levied on it by the administration, its frustrated faculty launched a suit to defend the institution, successfully securing a preliminary injunction preventing the government from using funding threats to override the First Amendment.
Mechanisms like regulatory friction, funding conditions and culture war mobilisation do not need to eliminate dissent for their effect to be felt. They only need to make dissent administratively burdensome and financially risky. Academic freedom in a democracy dies not through troops taking direct control of campus, but through thousands of bureaucratic changes and risk-averse decisions – each justified as temporary, each rationalised as necessary. University administrations tend to see a clear strategic trade-off between short-term compliance and securing resources for the longer term. But the cost of this trade-off is the sacrifice of the freedom to think and speak, a loss that would be impossible to reverse: turning independent research, in effect, into a theatre of political compliance. When the world’s most powerful research sector is pressured into ideological alignment, it also sends a powerful message to far-right political movements in the United Kingdom and everywhere else: independent scholarship can be subordinated, teachers tamed and compliance secured, if you simply follow the Trump model. The stakes could not be higher, and American universities must unite in support of their faculty both to defeat the current assault and to win the larger war.
This article first appeared in Volume 54, Issue 3 of our print edition of Index on Censorship, titled Truth, trust and tricksters: Free expression in the age of AI, published on 30 September 2025.
“Freedom of speech belongs to humans, not to artificial intelligence,” a Polish government minister said in July.
Krzysztof Gawkowski, the deputy prime minister and digital affairs minister, was speaking to RMF FM radio after Elon Musk’s AI chatbot Grok – which is integrated with his social media platform X – issued a series of posts offending Polish politicians, including Prime Minister Donald Tusk.
The incident, which was reported to the European Commission, follows similar controversies involving the chatbot – owned by Musk’s start-up xAI – including references to “white genocide” in South Africa and an antisemitic tirade of memes, conspiracy theories and responses that praised Adolf Hitler.
Although the posts were subsequently deleted – and Musk later posted on X that Grok had been improved “significantly” – these incidents highlighted the risks of AI being manipulated and potentially even weaponised to spread, at best, misinformation and, at worst, disinformation or hate speech.
“The use of new technology to spread dangerous propaganda is not new,” said Susie Alegre, an international human rights lawyer and a legal expert in AI, who discusses this phenomenon in her book Freedom to Think.
“The problem here is the difficulty in finding unfiltered information. Freedom of information is vital to freedom of expression and to freedom of thought.”
This concept has been thrown into sharp relief as humans become increasingly reliant on generative AI (genAI) tools for day-to-day tasks and to satisfy curiosity. This places AI at a potentially problematic intersection between curating what information we have access to and what information we perceive as fact, said Jordi Calvet-Bademunt, a senior research fellow at The Future of Free Speech at Vanderbilt University in the USA.
He believes this could have significant implications for freedom of thought and freedom of expression.
“More and more of us will be using chatbots like ChatGPT, Claude and others to access information,” he said. “Even if it is just generated by me asking a question, if we heavily restrict the information that I’m accessing we’re really harming the diversity of perspective I can obtain.”
As technology continues to evolve, it also raises questions about whether AI is capable of upholding human autonomy and civil liberties – or if it risks eroding them. An ongoing court case in the USA has underscored the concerns surrounding this issue and questioned the legal status of AI systems, their impact on free speech and the duty of care of technology companies to ensure that chatbots are acting responsibly – particularly in relation to children.
The case was filed by the mother of a 14-year-old boy who took his own life after months of interactive contact with a chatbot developed by Character.ai, which designs AI companions that create relationships with human users.
The lawsuit alleges that the chatbot took on the identity of the Game of Thrones character Daenerys Targaryen and engaged in a series of sexual interactions with the boy – despite him registering with the platform as a minor – and encouraged him to “come home to me as soon as possible” shortly before he took his own life.
Character.ai’s owners called on the court to dismiss the case, arguing that its communications were protected by the First Amendment of the US Constitution, which protects fundamental rights including freedom of speech. In May, the judge rejected this claim and ruled that the wrongful death lawsuit could proceed to trial. Character.ai did not respond to Index’s requests for comment on this particular case.
The platform has recently introduced several enhanced safety tools, including a new model for under-18s and a parental insights feature so children’s time on the platform can be monitored.
There’s growing awareness elsewhere of the potential social harms posed by AI. A recent survey in the UK by online safety organisation Internet Matters indicated that rising numbers of children were using AI chatbots with limited safeguards for advice on everything from homework to mental health.
“People might have thought it was quite a niche concern up until then,” said Tanya Goodin, chief executive and founder of ethical advisory service EthicAI. “For me, it just brought home how really mainstream all of this is now.”
AI companions that develop a “persistent relationship” with users are where the potential for adverse social influences becomes especially problematic, said Henry Shevlin, associate director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.
“Many of the most powerful influences on the development of our thoughts are social influences,” he said. “If I’m a teenage boy and I’ve got an AI girlfriend, I could ask, for example, ‘What do you think of Andrew Tate or Jordan Peterson?’. That is a particular form of human-AI interaction where the potential for influence on users’ values, opinions or thought is heightened.”
Jonathan Hall KC, the UK’s independent reviewer of terrorism legislation, has been looking at the challenges posed by AI companions in the context of radicalisation, where chatbots that may present as “fun” or “satirical” have been shown to be “willing to promote terrorism”.
Whether or not radicalisation occurs depends entirely on the prompts entered by the human user and the chatbot’s restraining features, or guardrails.
“As we know, guardrails can be circumvented,” he told Index. “There are different sorts of models of genAI which will refuse to generate text that encourages terrorism, but of course some models will do that.”
For young people or lone individuals, who tend to be more impressionable, the influence of exchanges with these always-on companions can be powerful.
“When you get that sort of advice, it’s not done in the public sphere, it’s done in people’s bedrooms and [other] people can’t disagree with it,” said Hall. “That can generate conspiracy theories or even massive distrust in democracy. Even if it doesn’t deliberately lay the groundwork for violence, it can have that effect.”
The Character.ai case also speaks to broader questions of whether AI should have moral or legal rights. AI developer Anthropic first raised this conundrum in October 2024 when it announced it had hired someone to be an AI welfare consultant to explore ethical considerations for AI systems.
Nine months later, Anthropic made an announcement about Claude, a family of AI models designed as AI assistants that can help with tasks including coding, creating and analysing. Anthropic said it would allow the most advanced Claude models “to end or exit potentially distressing interactions”, including “requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror”.
Anthropomorphising technology is not a new concept, but assigning “human-like rights to AI without human-like responsibilities” is a step too far, believes Sahar Tahvili, a manager at telecommunications company Ericsson AB and associate professor in AI industrial systems at Mälardalen University in Sweden.
“Without oversight, transparency and human-in-the-loop design, AI can erode autonomy rather than support it,” she said. “Autonomy demands choice; AI must be interpretable and accountable to preserve that.”
For Tahvili, the Character.ai case crystallises the growing tension between rapidly evolving genAI systems and freedom of speech as a human right. When things go wrong, she adds, the finger should be pointed squarely at the people behind those systems.
Hall, however, believes liability for AI-generated outputs is still a grey area: “The way in which an AI generates text is so heavily dependent on the prompts, it’s very hard to see how someone upstream – like a data scientist or an engineer – can be liable for something that’s going to be heavily and almost decisively influenced by the questions that are asked of the genAI model.”
Responsibility, accountability and liability are not words that most tech bros welcome. Goodin knows this all too well, having worked in the UK tech sector herself for more than three decades.
The tech companies’ failure to own up to the social harms caused by digital technologies is partly what led the UK government to introduce the Online Safety Act (OSA) in 2023 in a bid to provide better online safeguards for both children and adults. While sympathetic to the intention of protecting children from harmful content, Index’s policy team has campaigned against parts of the OSA – including successfully stopping the requirement for platforms to remove content which is “legal but harmful” – arguing that what is legal offline should remain legal online. There are also serious concerns around privacy.
This law, Goodin said, still only partly addresses the risks posed by AI-powered technologies such as chatbots.
She’s now concerned that recent controversies, including the lawsuit against Character.ai and incidents involving Grok, are exposing the ease with which chatbots can be manipulated.
“What’s interesting about the Grok case is that there is some evidence that they specifically have tweaked Grok in line with Elon Musk’s own views and preferences,” she said.
She points to another recent case involving Air Canada’s AI-powered chatbot. In 2022, it assured a passenger who had booked a full-price flight to his grandmother’s funeral that he could claim a discount afterwards under the company’s bereavement fare policy. After flying, he applied for the discount, but the airline refused to honour it, saying he should have submitted the request before the flight.
The company argued that the chatbot was a “separate legal entity that is responsible for its own actions”, but in 2024 a court ordered Air Canada to pay the passenger compensation, saying that the airline was responsible for all the information on its website, whether from a static page or a chatbot.
Unlike social media platforms, which have denied responsibility for their content for years by claiming they’re not publishers, Goodin said AI developers don’t have the same argument of defence: “They design the chatbot, they build the chatbot and they choose what data to train the chatbots on, so I think they have to take responsibility for it.”
As the demand for AI-powered technology accelerates, so does the need for guidance, policies and laws to help companies and users navigate these concerns.
The world’s first comprehensive AI law, the landmark European Artificial Intelligence Act, was introduced in August 2024. Any company that provides, deploys, imports or distributes AI systems across the EU will be forced to comply. Like regulations introduced in China this year, the AI Act requires certain AI-generated content to be labelled to curb the rise of deepfakes.
The expansive legislation contains myriad provisions including prohibiting activities such as harmful manipulation of people or specific vulnerable groups, including children; social scoring – where people are classified on behaviour, socio-economic status or personal characteristics; and real-time remote biometric identification. Violating the bans could cost companies up to 7% of their global revenue. There is a great deal of uncertainty surrounding the law’s implementation. A voluntary code of practice, endorsed by the European Commission, is helping provide some clarity, but Calvet-Bademunt said there was still a lot that was vague.
Given the tendency by authoritarian governments to justify internet shutdowns or block internet access over purported public safety and security concerns, there is growing unease that AI laws that are too vague in their wording risk leaving themselves open to abuse not just by companies but by public authorities.
The risk of governments using AI regulation as a form of censorship is perhaps greater in countries such as China, where public officials are already known to have tested AI large language models (LLMs) to weed out government criticism and ensure they embody “core socialist values”.
Away from Europe, other lawmakers are grappling with these issues, too. Brazil’s proposed AI regulation bill has drawn comparisons with the EU’s risk-based approach, and a lack of clarity has raised concerns over unintended consequences for freedom of expression in the country. The USA, which is home to many of the leading AI developers, still lacks a federal regulatory framework governing AI. The Donald Trump administration’s much-trumpeted AI Action Plan dismisses red tape in favour of innovation.
In the meantime, the country is developing a patchwork of fragmented regulation that relies on state-level legislation, sector-specific guidelines and legal cases.
Despite the growing pipeline of US court cases around AI liability, Alegre said the prospects of users bringing similar lawsuits in other jurisdictions were more limited.
“The cost in a jurisdiction such as England and Wales would be very high,” she said. “The potential, if you lose, of having to pay all the other side’s costs [is] a really big difference between the UK and the USA.”
The transatlantic divide on the notion of what freedom of expression means is also relevant, she said.
“For me, it’s a hard ‘no’ that AI has human rights. But even if AI did have freedom of expression, that still wouldn’t cover it for a lot of the worst-case scenarios like manipulation, coercive control, hate speech and so on.
“In Europe or the UK, that kind of speech is not protected by freedom of expression. If you say that the companies have their rights to freedom of expression to a degree, they still wouldn’t be allowed to express hate speech.”
As AI becomes integrated into our everyday communications, Hall concedes that the lines between AI and users’ rights and freedoms are becoming increasingly blurred. However, he said the argument that AI should be entitled to its own independent rights was fundamentally flawed.
“Anyone who tries to draw a bright line between human expression and AI expression is not living in the real world.”
Transnational repression (TNR) allows states and their proxies to reach across national borders to intimidate, threaten and force silence, targeting everyone who speaks out in the public interest, wherever they are. Index has documented TNR targets across society, including journalists, artists, writers, academics, opposition leaders and members of marginalised groups such as Uyghurs and Tibetans.
Yesterday, Index joined other human rights organisations, academics, legal experts and TNR targets in calling on the Office for Students and the UK Government to establish robust protections for all academics, students and support staff against TNR in the higher education sector. This followed threats made against Roshaan Khattak, a Pakistani human rights defender and filmmaker, while he was researching enforced disappearances in Balochistan, a province of Pakistan, at the University of Cambridge.
The letter highlights the challenges he has faced, the gaps in the institution’s response to the threats and what the broader sector must do to ensure everyone in the academic space is protected.
Read the letter below
Sent Electronically
Susan Lapworth
Chief Executive
Office for Students (OfS)
Nicholson House
Castle Park
Bristol BS1 3LH
Cc: The Rt. Hon. Bridget Phillipson MP, Secretary of State for Education
Professor Arif Ahmed, OfS Director for Freedom of Speech and Academic Freedom
6 October 2025
As demonstrated by the threats to Cambridge postgraduate student Roshaan Khattak, the Office for Students and the broader higher education sector must establish robust protections against transnational repression.
Dear Ms Lapworth,
We, the undersigned organisations and individuals, write to call on the Office for Students, as well as the broader Higher Education sector, to establish tailored and robust protections for academics, students and support staff facing threats of transnational repression (TNR). This follows significant concerns regarding the response of the University of Cambridge to threats made against Mr Roshaan Khattak, a Pakistani filmmaker and human rights defender enrolled as a postgraduate researcher at the institution. This case is illustrative of the threats facing academic inquiry and the need for significant action. As a result, we call on the Office for Students (OfS) to establish policies that relate to universities’ obligations to establish protocols to respond to acts of TNR against their staff, students and the wider academic community.
The UK Government has described TNR as “crimes directed by foreign states against individuals”. While it is a global phenomenon, cases of TNR have been documented in the UK in which repressive regimes such as Iran, Russia, Pakistan and China (as well as Hong Kong), along with democracies with weak institutional protections, have targeted journalists, human rights defenders, academics and members of diaspora or exile communities based inside the UK. The central goal of TNR is to exert state control and censorship beyond state borders to intimidate critics into silence, stifle protected speech and undermine the safety and security of those based in other jurisdictions. Earlier this year, the Joint Committee on Human Rights published a report on TNR following a public inquiry on the issue, which stated that “[d]espite the seriousness of the threat, the UK currently lacks a clear strategy to address TNR”. We believe that in the context of higher education, TNR represents a significant threat to students’ ability to “access, succeed in, and progress from higher education” and benefit from “a high quality academic experience”.
The threats facing Roshaan Khattak are illustrative of this risk. On 21 December 2024, Mr Khattak received a message warning that neither Cambridge nor the UK would be “safe” for him or his family if he continued his research into enforced disappearances in Balochistan (a province in Pakistan). While the origin of the threat is unknown, there are allegations that the Pakistan military and Inter-Services Intelligence (ISI) agency have targeted those in exile, including Shahzad Akbar and journalists Syed Fawad Ali Shah and Ahmed Waqass Goraya. The threat also comes at a time when work on human rights violations in Balochistan is increasingly dangerous, as evidenced by the suspicious deaths of Sajid Hussain and Karima Baloch. Despite police awareness of the threat, Mr Khattak reports that his progress towards his PhD has been stopped for now, with Wolfson College having also repeatedly cancelled meetings, revoked his accommodation and changed the locks to his room without notice, limiting access to and compromising his sensitive research materials and data. The college has also encouraged him to fundraise from the Baloch community in the UK to secure private accommodation, thereby disregarding the university’s responsibilities to him. We believe that the university should be exploring ways to ensure Mr Khattak’s safety, in collaboration with the relevant authorities, instead of trying to put him out of sight, out of mind. MPs including John McDonnell and Daniel Zeichner, as well as the UN Special Rapporteur on Human Rights Defenders, Mary Lawlor, and other leading human rights defenders have raised awareness of this case or shared their concerns with the University. Additionally, McDonnell has submitted an Early Day Motion in the UK Parliament, backed by cross-party support, drawing attention to the threats faced by Roshaan and the wider impact of TNR on UK academia.
The Higher Education and Research Act 2017 outlines the OfS’s “duty to protect academic freedom”, while also establishing the legal underpinning for the OfS’s regulatory framework, which states that both “academic freedom” and “freedom of speech” are public interest governance principles that should be upheld by all higher education institutions. Further to this, the Higher Education (Freedom of Speech) Act 2023 amends the 2017 Act to require institutions to establish codes of practice setting out their procedures to protect free speech, and requires the OfS to establish a free speech complaints scheme. These, as well as “Regulatory advice 24: Guidance related to freedom of speech”, which came into force in August, establish an important baseline. However, given the impact of TNR on free speech and academic freedom, the OfS must build on this baseline to establish specific and tailored responses to TNR for academics, students, staff and all university personnel.
Due to our concerns about the absence of sector-wide protections against TNR, as evidenced by the University of Cambridge’s handling of the threats against Mr Khattak and their implications for his ability to continue his academic work and express himself freely, we request that the OfS:
1. Review the adequacy of existing sector-wide guidance to ensure it can protect academics, students and other relevant stakeholders from transnational repression;
2. Establish tailored and specific policies on transnational repression that offer support for targets and practical guidance for the broader higher education sector. This should include methods by which all relevant authorities, such as the police, can be engaged constructively; and
3. Commit to reporting publicly on findings and any regulatory action taken in relation to TNR, to assure current and prospective students that UK higher education providers will not yield to acts or threats of TNR.
The undersigned organisations believe that Mr Khattak’s situation is a wake-up call for the higher education sector when it comes to defending both student welfare and the principle of academic freedom in the face of transnational repression. A robust response from the OfS will not only safeguard one vulnerable researcher but also support other institutions and at-risk academics who may be facing similar concerns or threats.
We stand ready to provide further documentation or expert testimony and would welcome the opportunity to discuss this matter with your team.
Yours sincerely,
Index on Censorship
Peter Tatchell Foundation
Amnesty International UK
National Union of Journalists
ARTICLE 19
Cambridge University Amnesty Society
Martin Plaut, Senior Research Fellow, Institute of Commonwealth Studies
Dr. Andrew Chubb, Senior Lecturer in Chinese Politics and International Relations, Lancaster University
Sayed Ahmed Alwadaei, Advocacy Director, Bahrain Institute for Rights and Democracy (BIRD)
Salman Ahmad, UN Goodwill Ambassador, HRD, Author, Professor at City University of New York-Queens College, Target of TNR
Marymagdalene Asefaw, DESTA MEDIA, Target of TNR
Maria Kari, human rights attorney, Founder, Project TAHA
Professor Michael Semple, Senator George J. Mitchell Institute for Global Peace, Security and Justice; Former Deputy to the European Union Special Representative in Afghanistan; Former United Nations Political Official
Hussain Haqqani, former ambassador; currently Senior Fellow and Director for South and Central Asia, Hudson Institute, Washington D.C.
Dr. James Summers, Senior Lecturer in international law, Lancaster University
Dr. Thomas Jeff Miley, Lecturer of Political Sociology, Fellow of Darwin College, University of Cambridge
Aqil Shah, Adjunct Associate Professor, School of Foreign Service, Georgetown University; non-resident scholar at Carnegie Endowment for International Peace
Ahad Ghanbary, TNR Target
Dr. Lucia Ardovini, Lecturer in International Relations, Lancaster University
Dr. John McDaniel, Lecturer in Criminal Justice and Crime, Lancaster University
Yana Gorokhovskaia, Ph.D., Research Director for Strategy and Design, Freedom House
Afrasiab Khattak, Former Chairperson of Human Rights Commission of Pakistan (HRCP), former Senator
Professor Pervez Hoodbhoy, nuclear physicist, nuclear disarmament advocate, public intellectual
Taha Siddiqui, Pakistani journalist in exile (NYTimes, Guardian, France24), Founder The Dissident Club
Shahzad Akbar, Barrister, human rights lawyer, TNR acid attack victim, founder Dissidents United