Platform or publisher?
Free speech versus hate speech?
Personal responsibility versus corporate responsibility?
What is social media for and who is responsible for what?
This week Donald Trump, the former American President, once again made international headlines. This time not for something he had said that day, but over whether he has the right to a public platform on social media, and on Facebook in particular.
Many of us undoubtedly have a significant problem with many of the tweets the former President posted during his tenure. My personal politics are well known, and no one would be surprised for a second that I find Trump abhorrent. He does have the right, however, as a citizen in a free society, to be offensive and abusive (although not to incite violence – which I believe he did).
But the question at hand isn’t actually about his free speech or whether he has the right to be on social media. Rather, the question is: what is social media? Is it a publisher of content that is legally responsible for the words and deeds of its users, or is it a platform which facilitates debate (and in too many cases hate)?
Social media is a core part of many of our lives. At times of crisis, both personal and national, it can be a blessing, letting you know friends and family are safe. At its best it can and should inspire thoughtful debate and challenge the status quo. But at its worst it can bring out the very worst in every one of us. It can incite hate, racism, misogyny, harassment, bullying and violence. It can radicalise. But it can also entertain and inform. In other words, social media and its impact is as complex as the people who use it.
Given how we all use it, it is easy to consider social media a free public space, one that we all have access to without restriction. But social media companies are exactly that – private companies – who get to decide who uses their services and how they get to use them. That doesn’t make them inherently bad, but it does mean that they have their own rationale for operation. It also means we have neither an intrinsic right to use them nor complete free speech on them – unless they allow it.
In the months ahead I’m going to be speaking a lot about the Online Safety Bill, the legislation progressing through the British parliament that will regulate our online access in future. There is clearly a cultural problem on social media – it can all too often be a grim place to spend time; we need to recognise that and help fix it. But not at the cost of protections for free speech, our right to debate and engage.
There needs to be space to be offended – while at the same time protecting people from hate and violence. We also need to remember that personal responsibility is relevant in this debate and no one acts with impunity.
So the challenge for all of us is to help make social media a better place while protecting our core rights – a balance that we must find.
“…protecting people from hate and violence”.
How do you protect people from hate? I understand violence, but hate? Only a few years ago, saying marriage was between a man and a woman was considered mainstream, yet offensive to the LGBTQ community. Now in the US it is classified as a hate crime and can result in a range of punishments.
Do we protect people from someone believing and saying marriage is between a man and a woman?
Children who use social sites on the Internet can be very cruel and hateful. The power of communication these tools provide amplifies hurtful statements once limited to the playground, but is that the hate we must protect people from? Is calling someone fat hate?
I believe statements that lead to violence, intentional or not, require justice in legal form. I’m not sure how you implement protections against someone hating another, or, more importantly, against someone else deciding that you hate someone.