On a housing estate somewhere in north-west London, a dispute said to be between rival groups of young men apparently rages on. From this quagmire of social deprivation emerges Chinx (OS) who, released from an eight-year custodial sentence at the four-year mark, starts dropping bars as if his very life depended on it. And, in a way, it does. Because for boys like Chinx, young, black and poor, there is only one way out and that is to become the next Stormzy. Only, two behemoths stand in his way: the Metropolitan Police and their apparent “side man” Meta, parent company of Facebook and Instagram.
In January 2022, Chinx posted a video clip of a drill music track called Secrets Not Safe. Following a request by the Metropolitan Police arguing that the post could lead to retaliatory gang-based violence, Meta removed the post and Chinx’s Instagram account was deleted.
Meta’s decision has now been reviewed by the Oversight Board, a quasi-independent adjudicator conceived to police the online giant’s application of its own policies but funded by the company.
The Board recently condemned the company’s decision to remove Chinx’s post and delete his account as not complying with Meta’s own stated values and with wider human rights considerations.
As part of its review of Meta’s decision, the Board made a Freedom of Information Act request to the Met over its requests to remove content from various online platforms. Whilst a good proportion of the force’s responses were unhelpful, bordering on obstructive, what it did disclose was troubling.
In the year to the end of May 2022, the Met asked online platforms, including Meta, to remove 286 pieces of content. Every single one of those requests related to drill music. No other music genre was represented. Some 255 of the Met’s requests resulted in the removal of content, a success rate of nearly 90%.
The decision makes for illuminating, if worrying, reading when one considers the potential chilling impact Meta’s actions may have on the freedom of expression of an already suppressed, marginalised and, some would argue, over-policed section of our community. Four areas of concern emerge.
Law enforcement access to online platforms
Instagram, in common with other applications, has reporting tools available to all users to make complaints. Whilst law enforcement organisations may use such tools, they also have at their disposal what amounts to direct access to these online platforms’ internal complaints procedures. When law enforcement makes a request to take content down, Meta deals with such a request “at escalation”. This triggers an investigation of the complaint by Meta’s internal specialist teams, which includes analysis of the content to determine whether it contains a “veiled threat”.
This case demonstrates a worrying pattern, in my view: namely, the level of privileged access that law enforcement has to Meta’s internal enforcement teams, as evidenced by correspondence the Board saw in this case.
Lack of evidence
What became clear during the Board’s exposition of the facts was that, despite the apparent need for a causal link between the impugned content and any alleged “veiled threat” or “threat of violence”, law enforcement advanced no evidence in support of their complaint. Given that, as all parties appeared to accept, the content itself was not unlawful, this is shocking.
On the face of it, then, Meta has a system allowing for fast-tracked, direct access to its complaints procedure which may result in the removal of content without any cogent evidence to support a claim that the content would lead to real-life violence or the threat thereof.
This omission is particularly stark because, as in this case, the violence alluded to in the lyrics took place approximately five years prior to the uploading of the clip. This five-year gap, as the Board commented, made it all the more important for real and cogent evidence to be cited in support of removal of the content. We ought to remind ourselves here that the Board found that in this case there was no evidence of a threat, veiled or otherwise, of real-life violence.
Lack of appeal
Meta’s internal systems dictate that if a complaint is taken “at escalation” – as all government requests to take down content are, including requests made by the Met Police – there is no internal right of appeal for the user. Chinx (OS) and the other accounts affected by this decision had no right to appeal the decision, either with Meta or with the Oversight Board. The result is that a decision that may, in some cases, result in the loss of an income stream as well as an erosion of the right to express oneself freely may go unchallenged by the user. In fact, as Chinx (OS) revealed during an interview with BBC Radio 4’s World at One programme, he was at no point during the process made aware why his account had been deleted and the content removed.
The Board itself commented that: “The way this relationship works for escalation-only policies, as in this case, brings into question Meta’s ability to independently assess government actors’ conclusions that lack detailed evidence.”
Disproportionality
Each of the three shortcomings in Meta’s procedures revealed by the Board is worrying enough on its own; but, coupled with the disproportionate impact this system has upon black males (the main authors and consumers of this content), it veers dangerously close to systemic racism.
The findings of the Oversight Board’s FOI request on the Met’s activities in relation to online platforms clearly back this up.
The Digital Rights Foundation argues that while some portray drill music as a rallying call for gang violence, it in fact serves as a medium for youth, in particular black and brown youth, to express their discontent with a system that perpetuates discrimination and exclusion.
An insidious and backdoor form of policing
The cumulative effect of Meta’s actions arguably amounts to an insidious and unlegislated form of policing. Without the glare of public scrutiny, with no transparency and no tribunal to test or comment on the lack of evidence, the Met have succeeded in securing punishment through the back door (removal of content could be argued to be a punishment, given that it may lead to loss of income) against content that was not, in and of itself, unlawful.
As the Board pointed out in their decision, for individuals in minority or marginalised groups, the risk of cultural bias against their content is especially acute. Art, the Board noted, is a particularly important and powerful expression of “voice”, especially for people from marginalised groups creating art informed by their experiences. Drill music offers young people, and particularly young black people, a means of creative expression. As the UN Special Rapporteur in the field of cultural rights has stated, “…representations of the real must not be confused with the real… Hence, artists should be able to explore the darker side of humanity, and to represent crimes… without being accused of promoting these.”
The right to express yourself freely, even if what you say may offend sections of our community, is one of those areas that truly test our commitment to human rights.
The New York Times is blocked in China.
Last month, China’s Ministry of Industry and Information Technology unveiled a new 14-month campaign to tighten control over the internet. The Chinese government is specifically concerned about virtual private networks, which punch holes through the country’s so-called “Great Firewall”. Without VPNs, China’s internet users are unable to browse some of the world’s largest websites. So the campaign made big news around the world.
But Charlie Smith of the 2016 Index on Censorship Digital Activism Award-winning GreatFire, an anonymous collective fighting Chinese internet censorship, told us that the VPN campaign is “actually kind of being misreported by the press, in general. It’s not as big a deal as it is being made out to be. We’d make a lot of noise if it was a big deal.”
Here are just six sites that are regularly blocked by China’s Great Firewall:
YouTube was first blocked in March 2008 during riots in Tibet and has been blocked several times since, including on the 25th anniversary of the Tiananmen Square protests in 2014. At the time of the Tibetan riots, many in China speculated that the YouTube ban was an attempt by the government to filter access to footage that a Tibetan exile group had released.
It’s typical for China’s internet censors to go into overdrive during politically sensitive events and time periods, which is why it came as no surprise that Instagram was blocked in 2014 after pro-democracy protests in Hong Kong. To some, the block on Instagram during the protests exposed Beijing’s fears that people on the mainland might be inspired by the events taking place in Hong Kong. While some parts of the social media site may have been restored, the site is still listed as 92 percent blocked.
In late December 2016, the Chinese government made waves by ordering Apple to remove the New York Times app from the Chinese digital app store. According to the newspaper, the app had been removed on 23 December under regulations prohibiting all apps from engaging in activities that endanger national security or disrupt social order. The New York Times website as a whole has been blocked in China since 2012, after the newspaper published an article regarding the wealth of former prime minister Wen Jiabao and his family. People turned to the NYT app after the blockage in order to maintain access to the paper’s stories. Now that the app is blocked as well, the New York Times is only available to those who had downloaded the app before its removal from the store.
In June 2012, Bloomberg, the popular business and financial information website, published a story regarding the multimillion-dollar wealth of then Vice President Xi Jinping and his extended family. Considering this story too invasive, the Chinese government blocked Bloomberg and has yet to reopen the site to the public. At the time, the Chinese government was going through a period of transition, as power shifted from then President Hu Jintao to Xi.
Censors in China blocked access to Twitter in June 2009 in anticipation of the 20th anniversary of the pro-democracy protests in Tiananmen Square. The move seems to reflect the government’s anxiety about the anniversary and the sensitive memories that come with it. The blocking of Twitter has also allowed for the rise of the Chinese app Weibo, a censored Twitter clone, which quickly became one of China’s most popular platforms.
One of the more recent bans by the Chinese government hit the international news agency Reuters. In March 2015, the organisation announced that both its English and Chinese sites were no longer reachable in the country. China has blocked media outlets like Reuters in the past, but those moves have always come after the release of a controversial story. In the case of Reuters, the ban seemed to come out of nowhere, with the reason behind the blockage still unclear.
Credit: Flickr / Jason Howie
Facebook made headlines this week over allegations by former staff that the site tampers with its “what’s trending” algorithm to remove and suppress conservative viewpoints while giving priority to liberal causes.
The news isn’t likely to shock many people. Attempts to control social media activity have been rife since Facebook opened to the general public and Twitter launched in 2006. We are outraged when political leaders ban access to social media, or when users face arrest or the threat of violence for their posts. But it is less clear cut when social media companies remove content they deem in breach of their terms and conditions, or move to suspend or ban users they deem undesirable.
“Legally we have no right to be heard on these platforms, and that’s the problem,” Jillian C. York, director for international freedom of expression at the Electronic Frontier Foundation, tells Index on Censorship. “As social media companies become bigger and have an increasingly outsized influence in our lives, societies, businesses and even on journalism, we have to think outside of the law box.”
Transparency rather than regulation may be the answer.
Back in November 2015, York co-founded Online Censorship, a user-generated platform to document content takedowns on six social media platforms (Facebook, Twitter, Instagram, Flickr, Google+ and YouTube), to address how these sites moderate user-generated content and how free expression is affected online.
Online Censorship’s first report, released in March 2016, stated: “In the United States (where all of the companies covered in this report are headquartered), social media companies generally reserve the right to determine what content they will host, and they do not consider their policies to constitute censorship. We challenge this assertion, and examine how their policies (and the enforcement thereof) may have a chilling effect on freedom of expression.”
The report found that Facebook is by far the most censorious platform. Of 119 incidents, 25 related to nudity and 16 were due to the user having a false name. Further down the list were removals on grounds of hate speech (six reports) and harassment (two).
“I’ve been talking with these companies for a long time, and Facebook is open to the conversation, even if they haven’t really budged on policies,” says York. If policies are to change and freedom of expression online strengthened, “we have to keep the pressure on companies and have a public conversation about what we want from social media”.
Critics of York’s point of view could say that if we aren’t happy with a platform, we can always delete our accounts. But it may not be so easy.
Recently, York found herself banned from Facebook for sharing a breast cancer campaign. “Facebook has very discriminatory policies toward the female body and, as a result, we see a lot of takedowns around that kind of content,” she explains.
Even though York’s Facebook ban only lasted one day, it proved to be a major inconvenience. “I couldn’t use my Facebook page, but I also couldn’t use Spotify or comment on Huffington Post articles,” says York. “Facebook isn’t just a social media platform anymore, it’s essentially an authorisation key for half the web.”
For businesses or organisations that rely on social media on a daily basis, the consequences of a ban could be even greater.
Facebook can even influence elections and shape society. “Lebanon is a great example of this, because just about every political party harbours war criminals but only Hezbollah is banned from Facebook,” says York. “I’m not in favour of Hezbollah, but I’m also not in favour of its competitors, and what we have here is Facebook censors meddling in local politics.”
York’s colleague Matthew Stender, project strategist at Online Censorship, takes the point further. “When we’re seeing Facebook host presidential debates, and Mark Zuckerberg running around Beijing or sitting down with Angela Merkel, we know it isn’t just looking to fulfil a responsibility to its shareholders,” he tells Index on Censorship. “It’s taking a much stronger and more nuanced role in public life.”
It is for this reason that we should be concerned by content moderators. Worryingly, they often find themselves dealing with issues in which they have no expertise. Much of the content takedown activity reported to Online Censorship involves anti-terrorist content mistaken for terrorist content. “It potentially discourages those very people who are going to be speaking out against terrorism,” says York.
Facebook has 1.5 billion users, so small teams of poorly paid content moderators simply cannot give appropriate consideration to all flagged content against the secretive terms and conditions laid out by social media companies. The result is arbitrary and knee-jerk censorship.
“I have sympathy for the content moderators because they’re looking at this content in a split second and making a judgement very, very quickly as to whether it should remain up or not,” says York. “It’s a recipe for disaster as it’s completely not scalable and these people don’t have expertise on things like terrorism.”
Content moderators — mainly based in Dublin, but often outsourced to places like the Philippines and Morocco — aren’t usually full-time staff, and so don’t have the same investment in the company. “What is to stop them from instituting their own biases in the content moderation practices?” asks York.
One development Online Censorship would like to see is Facebook making its content moderation guidelines public. In the meantime, the project will continue to provide crowdsourced transparency, allowing people to better understand what these platforms want from us.
These efforts are about getting users to rethink the relationship they have with social media platforms, says York. “Many treat these spaces as public, even though they are not, and so it’s a very, very harsh awakening when they do experience a takedown for the first time.”
(Image: instagram.com/mb459)
I feel sorry for Mario Balotelli. I’m sure he’ll take that as a comfort, knowing he’s not the only one asking “why always Mario?” That is not to say I think he’s drifted through life blameless and immaculate: not at all. I’ve only seen him in the flesh once, when Manchester City played Arsenal. He had a terrible game and got sent off for what even I, sitting in row Z at the other end of the ground, could see was a stupid and dangerous tackle.
But I’ve had a soft spot for Balotelli ever since someone pointed out he looks like a baby dinosaur. Without wishing to infantilise him, he’s like the boy in school who can’t help getting in trouble even when he’s trying to be good. Balotelli’s current situation is the perfect example. Last week, the player posted an image on Instagram showing the Nintendo character Super Mario (from whom the footballer takes his nickname, or at least would like to).
“Don’t be racist!”, it read (Yay!)
“Be like Mario” (LOL!)
“He’s an Italian plumber” (indeed he is)
“created by Japanese people” (correct)
“who speaks English” (sort of)
“and looks like a Mexican” (I suppose he does. A bit.)
“…jumps like a black man” (hmmm)
“and grabs coins like a Jew” (oh)
Long story short: people suggested that this might be a bit racist towards black and Jewish people; Balotelli responded that his mum is Jewish. Eventually, he took the post down and apologised. But by then it was too late. The Football Association announced over the weekend that the striker would face an investigation for using insulting and improper language with “reference to ethnic origin and/or colour and/or race and/or nationality and/or religion or belief”.
For what it’s worth, I don’t believe for a moment that Balotelli meant to insult anyone with his Instagram post. I think he entirely sincerely posted the meme, seeing it as an anti-racist message. The problem for poor Mario was that his ill-judged but innocent Instagram post came while the football world was actually paying attention to anti-semitism, as Wigan chairman Dave Whelan spouted a series of inappropriate race-related comments (Jews, money, you know, that stuff) after hiring former Cardiff manager Malky Mackay, who himself had run into controversy over dubious texts (Jews, money, on and on it goes).
Whelan stylishly compounded the issue with a “clarifying” interview in the Jewish Telegraph, where he spun the “some of my best friends are…” line, saying there must be “a dozen” Jews with apartments near his residence in Majorca, and “so many Jewish people go to Barbados at Christmas. That’s when I go. I see a lot of them in the Lone Star, in restaurants. I play golf with a few of them.”
In the same interview, Whelan told how when he was younger, people called the only Chinese restaurant in Wigan “the Chingalings”, and absolutely nobody had minded (though one doubts anyone asked the Chinese people of Wigan).
Whelan now also faces charges of misconduct from the FA. I’m not about to suggest that the FA has no right to investigate Whelan, or anyone involved in professional football in England. Associations can have their own rules and standards. But it would be sad if, in football’s newfound determination to deal with discrimination, innocents such as Balotelli got caught in the dragnet.
The interesting question is whether, in combating racism, one confronts the words used, the stereotypes invoked, the intent behind them, or all three at once. Is it possible to disentangle the three?
Nowhere is this more clearly illustrated than in the debate about whether Tottenham Hotspur fans should be able to chant “yids” or not. Short explanation: some Spurs fans are Jewish, and many identify as the “yid army”. Some people — mostly not Spurs fans — feel that Spurs fans chanting “yids” legitimises anti-semitic chanting by fans of other teams. Spurs fans say it’s their chant and their word and they are using it positively. Unpick that one, sports fans.
Words in and of themselves are neutral entities. Does saying the word “yid” — in and of itself — make me more or less anti-semitic? No. But the creation of a taboo can elevate a word, bringing a certain thrill to its use. In a society where, more or less, we have decided bigotry is a bad thing (which is not to suggest a society where bigotry is no longer a problem), the use of words and phrases associated with bigotry can take on a thrill of its own, as much for the well intentioned as for the malevolent. The bad taste joke, the inappropriate interjection, the drunken football chant using the words you might not be supposed to use, are the shared cigarette behind the school bike shed; the shared, line-crossing moments that so often bond people.
The joy for most Unilad Bantalopes lies in that shared bond. The person who created the meme which the ill-fated Mario Balotelli shared (very much from the Bantasaurus school) could simultaneously attempt to be anti-racist and use racial stereotypes. Human beings are complicated like that. And that’s why a zero-tolerance approach to words and meanings is unlikely to work on us.
This article was posted on 11 December 2014 at indexoncensorship.org