Content bans won't just eliminate "bad" speech online
06 Jun 2019

Social media platforms have enormous influence over what we see and how we see it.

We should all be concerned about the knee-jerk actions taken by the platforms to limit legal speech and approach with extreme caution any solutions that suggest it’s somehow easy to eliminate only “bad” speech.

Those supporting the removal of videos that “justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status” might want to pause to consider that it isn’t just content about conspiracy theories or white supremacy that will be removed.

In the wake of YouTube’s announcement on Wednesday 5 June, independent journalist Ford Fischer tweeted that some of his videos, which report on activism and extremism, had been flagged by the service for violations. Teacher Scott Allsopp had his channel, which featured hundreds of historical clips, deleted for breaching the rules that ban hate speech, though it was later restored with some videos still flagged.

It’s not just Google’s YouTube that has tripped over the inconsistent policing of speech online.

Twitter has removed tweets for violating its community standards, as in the case of US high school teacher and activist Carolyn Wysinger, whose post responding to actor Liam Neeson’s admission that he had roamed the streets hunting for black men to harm was deleted by the platform. “White men are so fragile,” the post read, “and the mere presence of a black person challenges every single thing in them.”

In the UK, gender critical feminists who have quoted academic research on sex and gender identity have had their Twitter accounts suspended for breaching the platform’s hateful conduct policy, while threats of violence towards women often go unpunished.

Facebook, too, has suspended the pages of organisations that have posted about racist behaviours.

If we are to ensure that all our speech is protected, including speech that calls out others for engaging in hateful conduct, then social media companies’ policies and procedures need to be clear, accountable and non-partisan. Any decisions to limit content should be taken by, and tested by, human beings. Algorithms simply cannot parse the context and nuance sufficiently to distinguish, say, racist speech from anti-racist speech.

We need to tread carefully. While an individual who incites violence towards others should not (and does not) enjoy the protection of the law, on any platform or in any kind of media, the problem of those who advocate hate cannot be solved by simply banning them.

In the drive to stem the tide of hateful speech online, we should not rush to welcome an ever-widening definition of speech to be banned by social media.

This means we – as users – might have to tolerate conspiracy theories, the offensive and the idiotic, as long as they do not incite violence. That doesn’t mean we can’t challenge them. And we should.

But the ability to express contrary points of view, to call out racism, to demand retraction and to highlight obvious hypocrisy depends on the ability to freely share information.
