
Content bans won’t just eliminate “bad” speech online
06 Jun 19

Social media platforms have enormous influence over what we see and how we see it.

We should all be concerned about the knee-jerk actions taken by the platforms to limit legal speech and approach with extreme caution any solutions that suggest it’s somehow easy to eliminate only “bad” speech.

Those supporting the removal of videos that “justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status” might want to pause to consider that it isn’t just content about conspiracy theories or white supremacy that will be removed.

In the wake of YouTube’s announcement on Wednesday 5 June, independent journalist Ford Fischer tweeted that some of his videos, which report on activism and extremism, had been flagged by the service for violations. Teacher Scott Allsopp had his channel featuring hundreds of historical clips deleted for breaching the rules that ban hate speech, though it was later restored with some videos still flagged.

It’s not just Google’s YouTube that has tripped over the inconsistent policing of speech online.

Twitter has removed tweets for violating its community standards, as in the case of US high school teacher and activist Carolyn Wysinger, whose post responding to actor Liam Neeson's admission that he had roamed the streets hunting for black men to harm was deleted by the platform. "White men are so fragile," the post read, "and the mere presence of a black person challenges every single thing in them."

In the UK, gender-critical feminists who have quoted academic research on sex and gender identity have had their Twitter accounts suspended for breaching the platform's hateful conduct policy, while threats of violence towards women often go unpunished.

Facebook, too, has suspended the pages of organisations that have posted about racist behaviours.

If we are to ensure that all our speech is protected, including speech that calls out others for engaging in hateful conduct, then social media companies’ policies and procedures need to be clear, accountable and non-partisan. Any decisions to limit content should be taken by, and tested by, human beings. Algorithms simply cannot parse the context and nuance sufficiently to distinguish, say, racist speech from anti-racist speech.

We need to tread carefully. While an individual who incites violence towards others should not (and does not) enjoy the protection of the law on any platform or any kind of media, the problem of those who advocate hate cannot be solved simply by banning them.

In the drive to stem the tide of hateful speech online, we should not rush to welcome an ever-widening definition of speech to be banned by social media.

This means we – as users – might have to tolerate conspiracy theories, the offensive and the idiotic, as long as they do not incite violence. That doesn't mean we can't challenge them. And we should.

But the ability to express contrary points of view, to call out racism, to demand retraction and to highlight obvious hypocrisy depends on the ability to freely share information.