12 Dec 2025 | Americas, Brazil, News
Brazilian left-wing influencer Thiago Torres, best known as Chavoso da USP (roughly translated as the University of São Paulo’s swaggy chav), has faced increasing political persecution in recent months. The persecution reached an international level last month when Thiago’s main Instagram profile, with more than one million followers, was taken down by Meta.
Thiago then started using an old backup Instagram account with 385,000 followers, which was also taken down after allegations that it had been created to circumvent the previous block. Arbitrarily and without the possibility of appeal, Meta blocked all access to his accounts and is set to permanently delete their content. A warning on Instagram said that the account “does not follow Community Standards”, although the company did not specify which rules had been breached. Even after a preliminary injunction was issued on the morning of 20 November forcing Meta to return Thiago’s main account under threat of a fine, three other accounts were taken down later that same evening.
According to politicians, by maintaining the block on the influencer, Meta is involved in yet another case of big tech insubordination to the Brazilian justice system. Federal Congresswoman Sâmia Bomfim, from PSOL (Freedom and Socialism Party), classified the event as a “direct attack on freedom of speech and the work of those who denounce injustices within Brazil.” Thiago sees it as “an offensive against progressive, mainly radical, left-wing voices”.
This is not the first time Meta has taken down accounts with large numbers of followers linked to the Brazilian left. In August this year, historian and influencer Jones Manoel, former candidate for governor of Pernambuco with the PCB (Brazilian Communist Party) and the Brazilian influencer with the most growth on the platform since June, was arbitrarily banned from Instagram. In October, activist and comedian Tiago Santineli also had his account, with 850,000 followers, blocked following online comments about the death of Charlie Kirk.
Since 9 December 2025, members of parliament from PSOL, PT (Worker’s Party), and left-wing news outlets have reported that their profiles “don’t appear in searches, can’t be tagged, and [that] their reach has plummeted in an orchestrated manner”, according to Federal Congresswoman Fernanda Melchionna. This is known as shadow-banning.
The bans follow a dispute between big tech companies and the left-wing government of Luiz Inácio Lula da Silva which dates from January 2025, when Brazil’s Attorney General’s Office sent an extrajudicial notification to Meta because of the company’s decision to stop using independent fact-checkers. The concern was that this would further exacerbate the problem of “fake news”, which became prevalent in the 2018 and 2022 elections, particularly on the part of the Brazilian right wing. A major dispute between the Brazilian judiciary and Elon Musk’s X also took place last year, resulting in the social network being blocked in the country until Musk complied with court orders.
The regulation of big tech companies – largely similar to what the EU has implemented – is considered by the current government as a matter of national sovereignty. In July, President Trump sent a letter to Brazil’s president, Lula, imposing a massive 50% tariff that rendered the export of a range of Brazilian products to the USA unfeasible. According to the letter, the measure came as retaliation for the sanctions against big tech and in support of former president Jair Bolsonaro, a representative of the Brazilian far right and ally of Trump who was convicted of an attempted coup d’état.
In his speech at the UN General Assembly in September, President Lula said that “even under unprecedented attack, Brazil chose to resist and defend its democracy. There is no justification for the unilateral and arbitrary measures against our institutions and our economy. The aggression against the independence of the judiciary is unacceptable.”
It is not only in Brazil that US intervention in favour of big tech has been felt. Back in January, Meta’s CEO Mark Zuckerberg clearly stated on the Joe Rogan Experience podcast that “the US government has a role in basically defending [big tech] abroad”.
In the same week that Brazil hosted COP30 and witnessed the preventive arrest of Bolsonaro, the suspension of five accounts belonging to a left-wing influencer shows that big tech might also have a role in defending the US government’s interests in Brazil.
Researchers like the Brazilian academic Walter Lippold denounce what they call “digital colonialism”, the interconnection between imperialist interests and big tech. To Brazilian sociologist Sérgio Amadeu, “online social networks and platforms controlled by big tech companies are geopolitical structures increasingly aligned with the far right.” In June, at seminars held by Bolsonaro’s right-wing Liberal Party (PL), executives from Meta gave workshops teaching how to use AI and achieve greater reach on the platform.
Born and raised in Brasilândia, an outlying neighbourhood of São Paulo, Thiago Torres first rose to prominence as a social sciences student at the University of São Paulo.
Ranked many times as the best university in Latin America, the University of São Paulo subscribes to a national public education project aimed at social development. Despite this, USP remains elitist in the social and racial makeup of both its faculty and students. Thiago spoke about the way this composition shaped the production of knowledge within USP, and used his platform to share social theory with a wider public.
Now graduated and working as a teacher, Thiago has become known for denouncing cases of political corruption and police violence. Overtly anti-capitalist and anti-imperialist, he has unsurprisingly become a target for public officials and companies who benefit from the country’s social divisions.
In August this year, Thiago was called to testify in the controversial CPI dos Pancadões, a parliamentary commission inquiring into street funk parties. Under the pretext that they disturb public order, it is common for the military police to raid pancadões, using extreme violence and murdering the young people present, many of whom are from racial minorities and come from lower social strata.
Thiago’s account dedicated to police violence, @fim.da.pm (“End the Military Police”), is among those blocked by Meta. The company had until 28 November to return the influencer’s main account, but this didn’t happen.
“Instagram will face a daily fine for each day it fails to comply [with the judiciary decision]”, Thiago explained. “But it’s a relatively small fine for them, so it’s possible they might disregard the court order.” Unfortunately, this seems to be the case.
20 Sep 2024 | News, United Kingdom
In August 2021, when the Taliban took over Kabul and home searches became ubiquitous, women started to delete anything they thought could get them in trouble. Books were burned, qualifications were shredded, laptops were smashed. But for 21 members of a women’s creative writing group, a lifeline remained: their WhatsApp group. Over the next year they would use this forum to share news with one another, a story since chronicled in My Dear Kabul, published by Coronet as a project of Untold Narratives, a development programme for marginalised writers. Doing so through WhatsApp was not incidental. The app’s use of end-to-end encryption provided a strong level of protection. The only way the Taliban could read their messages was by finding their phones, seizing them, forcing them to hand over passwords and going into their accounts.
End-to-end encryption is not sexy. Nor do those four words sound especially interesting. It’s easy to switch off when a conversation about it starts. But as this anecdote shows it’s vitally important. Another story we recently heard, also from Afghanistan: a man hid from the Taliban in a cave and used WhatsApp to call for help. Through it, safe passage to Pakistan was arranged.
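The protection rests on a simple idea: the keys that unlock messages exist only on the two endpoints, never on the server in between. As a loose illustration, here is a deliberately toy Diffie-Hellman exchange in Python. It is a sketch of the underlying principle only, not the Signal protocol that WhatsApp and Signal actually use (which is built on elliptic curves and much more besides), and the parameters chosen here are for demonstration, not security:

```python
import hashlib
import secrets

# Toy Diffie-Hellman key exchange: an illustrative sketch of the idea
# behind end-to-end encryption, NOT a production protocol.
p = 2**127 - 1   # a Mersenne prime; real systems use far stronger parameters
g = 3

a = secrets.randbelow(p - 2) + 1   # Alice's private value, never transmitted
b = secrets.randbelow(p - 2) + 1   # Bob's private value, never transmitted

A = pow(g, a, p)   # public values: all an eavesdropper on the wire ever sees
B = pow(g, b, p)

# Each side combines its own private value with the other's public value and
# arrives at the same shared secret, from which a message key is derived.
alice_key = hashlib.sha256(str(pow(B, a, p)).encode()).hexdigest()
bob_key = hashlib.sha256(str(pow(A, b, p)).encode()).hexdigest()
assert alice_key == bob_key   # identical keys, derived without ever being sent
```

In a real messenger, the derived key then encrypts each message, so the server relaying the ciphertext learns nothing, and anyone intercepting traffic without access to a device's private key learns nothing either.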
It’s not just in Afghanistan where end-to-end encryption is essential. At Index we wouldn’t be able to do our work without it. We use encrypted apps to message between our UK-based staff and to keep in touch with our network of correspondents around the world, from Iran to Hong Kong. We use it to keep ourselves safe and we use it to keep others safe. Our responsibility for them is made manifest by our commitment to keep our communication and their data secure.
Beyond these safety concerns we know end-to-end encryption is important for other reasons. It’s important because we share many personal details online, from who we are dating and who we vote for to when our passport expires, what our bank details are and even our online passwords. In the wrong hands these details can be very damaging. It’s important too because privacy is essential, both in its own right and as a guarantor of our other fundamental freedoms. Our online messages shouldn’t be open to all, much as our phone lines shouldn’t be tapped. Human rights defenders, journalists, activists and MPs message via platforms like Signal and WhatsApp for their work, as do people more broadly who are unsettled by the prospect of losing their privacy.
Fortunately, today accessible, affordable and easy-to-use encryption is everywhere. The problem is its future looks uncertain.
Last October, the Online Safety Act was passed in the UK, a sprawling piece of legislation that puts the onus on social media firms and search engines to protect children from harmful content online. It’s due to come into force in the second half of 2025. In it, Section 121 gives Ofcom powers to require technology companies to “use accredited technology” that could undermine encryption. At the time of the Act’s passage, the government gave assurances this would not happen, but comments from senior political figures like Sadiq Khan, who believe amendments to the act are needed, have done little to reassure people.
It’s not just UK politicians who are calling for a “back door”.
“Until recently, traditional phone tapping gave us information about serious crime and terrorism. Today, people use Telegram, WhatsApp, Signal, Facebook, etc. (…) These are encrypted messaging systems (…) We need to be able to negotiate what you call a ‘back door’ with these companies. We need to be able to say, ‘Mr. Whatsapp, Mr. Telegram, I suspect that Mr. X may be about to do something, give me his conversations,’” said French Interior Minister Gérald Darmanin last year.
Over the last few years police across Europe, led by French, Belgian and Dutch forces, have breached the encryption of users on Sky ECC and EncroChat too. Many criminals were arrested on the back of these hacking operations, which were hailed as a success by law enforcement. That may be the case. It’s just that people who were not involved in any criminal activity would also have had their messages intercepted. While on those occasions public outcry was muted, it won’t be if more commonly used tools such as WhatsApp or Signal are made vulnerable.
Back in the UK, breaking encryption would be a disaster. Not only would companies like Signal leave our shores, other nations would likely follow suit.
For this reason we’re pleased to announce the launch of a new Index campaign highlighting why encryption is crucial. WhatsApp, the messaging app, have kindly given us a grant to support the work. As with any grant, the grantee has no influence over our policy positions or our work (and we will continue to report critically on Meta, WhatsApp’s parent company, as we would any other entity).
We’re excited to get stuck into the work. We’ll be talking to MPs, lawyers, people at Ofcom and others both inside and outside the UK. With a new raft of MPs here and with conversations about social media very much in the spotlight everywhere, it’s a crucial moment to make the case for encryption loud and clear, both publicly and, if we so choose, in a private, encrypted forum.
17 Oct 2023 | News
More than two years ago, as Myanmar’s coup unfolded, open-source content provided unique insight into what was happening in the country and the battle lines that were soon to emerge. Live from a roundabout in the capital of Naypyidaw, exercise instructor Khing Hnin Wai unwittingly captured and disseminated live footage of the coup taking place via Facebook. For a brief period, images of Khing Hnin Wai dancing in front of a military convoy became symbolic of Myanmar’s struggle to maintain democracy.
Here at Myanmar Witness, we use user-generated, openly available content like this to identify, verify and report on events across Myanmar involving abuses of human rights and contraventions of international law. We let the evidence speak for itself when we publish the results of our investigations, collaborate with media and share evidence with justice and accountability mechanisms.
Content we examine is rarely as innocuous as Khing Hnin Wai’s video. Since our inception as one of the witness projects at the Centre for Information Resilience, we have used imagery from social media, geospatial providers, and other forms of ‘open’ sources to contribute towards accountability for crimes being committed. These include horrific beheadings, the widespread intentional use of fire, the impact of the conflict on sites with special protections, and at a scale and sophistication beyond what we see in our other witness projects — hate speech and doxxing.
Doxxing exposes the private information of individuals, such as addresses, phone numbers and more, without their consent. In Myanmar it is done with the intent to intimidate, spread fear and suppress voices. Doxxing has become the digital manifestation of the real-world violence faced by thousands of people in Myanmar every day. Our findings have repeatedly shown that in Myanmar, the internet is being used as a weapon – and this is steeped in history. Facebook was widely used as a vehicle for the promotion of hate speech and incitement to violence during the Rohingya crisis, which led to the social media company admitting failings in the way it handled content on its platform.
In January this year, following an investigation into online abuse against Burmese women, we released our Digital Battlegrounds report, which showed how the situation is worsening. Its findings were damning: Facebook and Telegram were hosting politically-motivated abuse targeted at Burmese women. Abuse included real-world threats of violence, gendered hate speech and sexually violent commentary. The source of this content was clear – pro-Myanmar Military accounts and users.
To their credit, and in response to Myanmar Witness and BBC outreach, both Meta and Telegram removed a large amount of content which violated their respective terms of service. However, in the case of Telegram, soon after some accounts were removed or suspended, new ones emerged to take their place. Identifying online abusers and their violent content continues to be painstaking and tedious work.
The online information environment in Myanmar has been, and continues to be, part of the conflict. In the wake of an airstrike by the Myanmar Air Force against Pa Zi Gyi village in April 2023, the darkness of Myanmar’s digital conflict resurfaced. With some media reporting more than 160 dead, it was one of the worst airstrikes seen in Myanmar and led to an outpouring of domestic and international sympathy and condemnation.
In Myanmar, a ‘black profile’ campaign emerged online, mourning the victims of the attack. Today’s report by Myanmar Witness investigators shows just how the military regime retaliated with a brutal crackdown — online and offline — against those who dared to show sympathy. For engaging in non-violent online protest, individuals were met with arrests, threats and physical violence. Both their digital and real-world voices were silenced.
Pro-junta groups doxxed those who protested digitally as online sympathy grew in the wake of the airstrike. We found a link: at least 11 of the 20 individuals who were doxxed were then arrested for their activities on Facebook within days of being exposed by pro-junta Telegram channels. They were among a total of 69 people who were arrested within three weeks of the airstrike. In the vast majority of cases, social media activity was the stated reason for their arrest by the authorities.
Some months following their arrest, five individuals who were influential and well-known — a former journalist and several celebrities — were released. Multiple pro-junta Telegram channels hinted at their release before it occurred, indicating information sharing, if not coordination, between these channels and the military authorities. The fate of the more than 60 others detained in the same period remains unclear. Our research only scratches the surface of the vicious digital and physical conflict in Myanmar, and there are no signs of it abating.
While those who incite and intimidate online are ultimately responsible, inadequate moderation of content by social media platforms is part of the problem, as is the protracted war in Myanmar which recycles and reinforces the online violence. While others go online to perpetuate conflict, we at Myanmar Witness will continue to use digital content to identify, verify and report on the conflict, and to ensure that those at risk of being silenced have their voices heard.
8 Dec 2022 | Artistic Freedom, News, United Kingdom
On a housing estate, somewhere in north-west London, a dispute, said to be between rival groups of young men, apparently rages on. From this quagmire of social deprivation emerges Chinx (OS) who, released from an eight-year custodial sentence at the four-year mark, starts dropping bars like his very life depended on it. And, in a way, it does. Because for boys like Chinx, young, black and poor, there is only one way out and that is to become the next Stormzy. Only, two behemoths stand in his way: the Metropolitan Police and their apparent “side man” Meta, parent company of Facebook and Instagram.
In January 2022, Chinx posted a video clip of a drill music track called Secrets Not Safe. Following a request by the Metropolitan Police arguing that the post could lead to retaliatory gang-based violence, Meta removed the post and Chinx’s Instagram account was deleted.
Meta’s decision has now been reviewed by the Oversight Board, a quasi-independent adjudicator conceived to police the online giant’s application of its own policies, though funded by the company.
The Board recently condemned the company’s decision to remove Chinx’s post and delete his account as not complying with Meta’s own stated values and with wider human rights considerations.
As part of its review of Meta’s decision, the Board made a Freedom of Information Act request to the Met over its requests to remove content from various online platforms. Whilst a good proportion of the Met’s responses were unhelpful, bordering on obstructive, what it did disclose was troubling.
In the year to the end of May 2022, the Met asked online platforms, including Meta, to remove 286 pieces of content. Every single one of those requests related to drill music. No other music genre was represented. Some 255 of the Met’s requests resulted in the removal of content, a success rate of almost 90%.
The decision makes for illuminating, if worrying, reading when one considers the potential chilling impact Meta’s actions may have on the freedom of expression of an already suppressed, marginalised and some would argue, over-policed section of our community. Four areas of concern emerge.
Law enforcement access to online platforms
Instagram, in common with other applications, has reporting tools available to all users to make complaints. Whilst law enforcement organisations may use such tools, they also have at their disposal what amounts to direct access to these online platforms’ internal complaints procedures. When law enforcement makes a request to take content down, Meta deals with such a request “at escalation”. This triggers an investigation by Meta’s internal specialist teams, which includes analysis of the content to decipher whether there is a “veiled threat”.
This case demonstrates a worrying pattern in my view; namely the level of privileged access that law enforcement has to Meta’s internal enforcement teams, as evidenced by correspondence the Board saw in this case.
Lack of evidence
What became clear during the Board’s exposition of the facts was that, despite the apparent need for a causal link between the impugned content and any alleged “veiled threat” or “threat of violence”, law enforcement advanced no evidence in support of their complaint. Given that, as all parties appeared to accept, the content itself was not unlawful, this is shocking.
On the face of it then, Meta has a system allowing for fast-tracked, direct access to their complaints procedure which may result in the removal of content, without any cogent evidence to support a claim that the content would lead to real life violence or the threat thereof.
This omission is particularly stark given that, in this case, the violence alluded to in the lyrics took place approximately five years before the clip was uploaded. This five-year gap, as the Board commented, made it all the more important for real and cogent evidence to be cited in support of removal of the content. We ought to remind ourselves here that the Board found that in this case there was no evidence of a threat, veiled or otherwise, of real-life violence.
Lack of appeal
Meta’s internal systems dictate that if a complaint is taken “at escalation” – as all government requests to take down content are, and this includes requests made by the Met Police – this means there is no internal right of appeal for the user. Chinx (OS) and the other accounts affected by this decision had no right to appeal the decision with Meta nor with the Oversight Board. The result is that a decision that, in some cases, may result in the loss of an income stream as well as an erosion of the right to express oneself freely, may go unchallenged by the user. In fact, as Chinx (OS) revealed during an interview with BBC Radio 4’s World at One programme, he was not made aware at any point during the process why his account had been deleted and the content removed.
The Board itself commented that: “The way this relationship works for escalation-only policies, as in this case, brings into question Meta’s ability to independently assess government actors’ conclusions that lack detailed evidence.”
Disproportionality
Each of the three shortcomings revealed by the Board within Meta’s procedures is worrying enough; but, coupled with the disproportionate impact this system has upon black males (the main authors and consumers of this content), it veers dangerously close to systemic racism.
The findings of the Oversight Board’s FOI request on the Met’s activities in relation to online platforms clearly back this up.
The Digital Rights Foundation argues that while some portray drill music as a rallying call for gang violence, it in fact serves as a medium for youth, in particular black and brown youth, to express their discontent with a system that perpetuates discrimination and exclusion.
An insidious and backdoor form of policing
The cumulative effect of Meta’s actions arguably amounts to an insidious and unlegislated form of policing. Without the glare of public scrutiny, with no transparency and no tribunal to test or comment on the lack of evidence, the Met have succeeded in securing punishment (removal of content could be argued to be a punishment, given that it may lead to loss of income) through the back door against content that was not, in and of itself, unlawful.
As the Board pointed out in their decision, for individuals in minority or marginalised groups, the risk of cultural bias against their content is especially acute. Art, the Board noted, is a particularly important and powerful expression of “voice”, especially for people from marginalised groups creating art informed by their experiences. Drill music offers young people, and particularly young black people, a means of creative expression. As the UN Special Rapporteur in the field of cultural rights has stated, “…representations of the real must not be confused with the real… Hence, artists should be able to explore the darker side of humanity, and to represent crimes… without being accused of promoting these.”
Defending the right to express yourself freely, even when what is said may offend sections of our community, is one of the truest tests of our commitment to this human right.