Facebook's online shaming mobs

This article is part of the Index on Censorship Young Writers / Artists Programme

09 Jul 2014
BY KATIE DANCEY

(Image: Katie Dancey)

 

Twitter trolls, online mobs and “offensive” Facebook posts are constantly making headlines as authorities struggle to determine how to police social media. In a recent development, links posted on Facebook allow users to see which of their friends have “liked” pages, such as those representing Britain First, the British National Party and the English Defence League. Clicking one of these links brings up a list of friends who have liked the page in question. Many Facebook users have posted the links, with accompanying messages stating their intention to delete any friends found on the lists. One user wrote, “I don’t want to be friends, even Facebook friends, with people who support fascist political parties, so this is just a quick message to give you a chance to unlike the Britain First page before I un-friend you.” Tackling racism is admirable, but when the method is blackmail and intimidation, who is in the wrong? All information posted on Facebook could be considered public property, but what are the ethical implications of users taking it upon themselves to police the online activity of their peers? When social media users group together to participate in online vigilantism, what implications are there for freedom of expression?

This online mob is exercising its right to freedom of expression by airing views about right wing groups. However, in attempting to tackle social issues head on, the distributors of these links are unlikely to change radical right-wing ideologies, and more likely to deter right-wing sympathisers from speaking freely about their views. In exerting their own right to free speech, mobs risk restricting that of others. The opinions of those who feel targeted by online mobs won’t go away, but their voice will. The fear of losing friends or being labelled a racist backs them into a corner, where they are forced to act in a particular way, creating a culture of self-censorship. Far from combating social issues, silencing opinion is more likely to exacerbate the problem. If people don’t speak freely, how can anyone challenge extreme views? By threatening to remove friends or to expose far-right persuasions, are the online vigilantes really tackling social issues, or are they just shutting down discussion by holding friendships to ransom?

Public shaming is no new tactic, but its online use has gone viral. Used as a weapon to enforce ideologies, online witch-hunts punish those who don’t behave as others would want them to. Making people accountable for their online presence, lynch mobs target individuals and shame them into changing their behaviour. The question is whether groups are revealing social injustices that would otherwise go unpunished, or whether they are using bullying tactics in a dictatorial fashion. The intentions of the mob in question are good: to combat racism. But does that make their methods justifiable? These groups often promote a “with us or against us” attitude; if you don’t follow these links and delete your racist friends, you must be a racist too. Naming and shaming those who don’t follow the cultural norm is also intended to dissuade others from participating in similar activities. Does forcing people into acting a certain way actually generate any real change, or is it simply an act of censorship?

With online mobs often taking on the roles of judge, jury and executioner, the moral implications of their activities are questionable. It may start as a seemingly small Facebook campaign such as this one, but what else could stem from it? One Facebook user commented, “Are you making an effort to silence your Facebook friends who are to the right of centre?” This concern that the target may become anyone with an alternative political view demonstrates how readily online mobs can widen their scope. Who polices this activity and who decides when it has gone too far?

Comments under the Facebook posts in question attract plenty of support for the deletion of any friends who “like” far right groups, but very rarely does anyone question the ethics of this approach. No longer content to stand idly by, Facebook users may feel they can make an impact through strength in numbers and a very public forum. Do those who haven’t previously had a channel for tackling social issues suddenly feel they have a public voice? Sometimes it’s difficult to accept that absolutely everyone has the right to free speech, even those who hold extreme views. In a democracy, there may be political groups that offend us, but those groups still have a right to be heard. The route to tackling those views can’t be to silence them, but to encourage discussion.

This article was posted on July 9, 2014 at indexoncensorship.org

Katie Dancey

Katie Dancey is a freelance journalist who graduated from the University of Birmingham with a BA (hons) in Drama and Theatre Arts.

4 responses to “Facebook’s online shaming mobs”

  1. Michael says:

    > “Facebook and Twitter are just two of many platforms”

    yes they ARE just two of MANY

    the problem is the trend towards extreme centralisation.

    censorship is pretty much an inevitable end result if that goes too far.

    When you have billions of people using the same platform controlled by one single company, that company becomes a pretty attractive target for anyone with power, influence, lots of money or the ability to get a court order who wants to prevent lots of people from saying or hearing about something. It also pushes things towards another less obvious but possibly even more dangerous form of censorship: self-censorship, where people don’t see much about anything they *might* not agree with and don’t even realise what is happening!

    either way extreme centralisation has a dark side.

    pretty obvious stuff really

    Facebook will probably mostly do whatever keeps its shareholders happy.
    if users don’t like it, why would Facebook care about their complaints if most of them just keep using it and don’t even look at anything else?

    nothing can really help those who refuse to help themselves,

    and as you hinted, there are lots of other platforms out there
    including some excellent decentralised federated networks
    (where the network is not controlled by any one single company or organisation – if you don’t like the admin policy on one site you can choose another, or even run one yourself, and still use the network and have contacts on any of the sites in the network)
    (eg: Friendica, Diaspora, RedMatrix, Pumpio, StatusNet, etc)

    and aside from personal feeds and things organised around contacts there are also still plenty of other “social” things to be found out there –
    forums, communities, newsgroups, bulletin boards, events sites, chat networks, email discussion lists, etc, etc, etc – the internet is still out there!

  2. Jillian C. York says:

    “Does forcing people into acting a certain way actually generate any real change, or is it simply an act of censorship?”

    This is a false dichotomy. Public shaming may not be an effective tactic, but nor is it censorship. Rather, public shaming is a social consequence to speech. There are surely better ways of changing minds, but calling it censorship is disingenuous.

    • Kat says:

      When the goal is to shame people into relinquishing their views and to preemptively shame anyone who might otherwise reveal an opposing viewpoint, I’d call it censorship.

  3. Robert says:

    There is another dynamic to all this, which is the role that the social media platforms themselves play in mediating such debates and dialogue.

    First, Facebook and Twitter are just two of many platforms. Political discussions do not *have* to take place on those services. So cutting connections on those social media sites is not quite the same as ‘censorship’ in the formal sense that we understand it.

    Nevertheless, the issue remains because of the dominance of a very small number of social media sites. They distort political debate in several ways. Most obviously, Twitter reduces argument to 140 characters, killing nuance and ambiguity as it does so. Many a flame war has started simply because the Tweeters did not have space to say “in my opinion” or “I may be wrong, but” or “what do you think?”. Facebook’s algorithms show us stuff they know we ‘like’, thereby creating a personalised bubble for each of us where challenging views are excised in favour of posts they predict we will agree with. Language plays a part here: there is a ‘like’ button, which doubles as a ‘subscribe’ button. I may wish to follow the BNP’s posts to be aware of what nonsense they’re spouting, but I’ll be damned if I am going to click a button labelled ‘like’ in order to do so!

    Finally, there is the inconsistent way in which social media sites apply their own, privatised censorship. I’m sure Index on Censorship has logged this issue and campaigned for ToS changes in the past. Violent and misogynist groups are allowed to exist while tastefully erotic groups are blocked. There is no oversight and few means of redress if you fall foul of an internal blocking decision.