After seven years of debate, five Secretaries of State and hours and hours of parliamentary discussion, the Online Safety Bill has reached the second chamber of the British legislature. In the coming months new language will be negotiated, legislative clauses ironed out and deals done with the government to get it over the line. But the questions for Index are: what will be the final impact of the legislation on freedom of expression, and how will we know how much content is being deleted as a matter of course?
The team at Index have been working with partners for several years to try to ensure that freedom of expression online is protected within the legislation, and that the unintended consequences of the bill don't impinge on our rights to debate, argue, inspire and even dismiss each other on the online platforms which are now fundamental to many of our daily lives. After all, in a post-Covid world, many of us don't differentiate between time spent online and time spent in real life; they are typically one and the same. That isn't to say, however, that as a society we have managed to establish social norms online (as we have offline) which allow the majority of us to go about our daily lives without unnecessary conflict and pain.
We've been working so intently on this bill not just because we want to protect digital rights in the UK but because this legislation is likely to set a global standard. Restrictions on speech in this legislation will give cover to tyrants and bad-faith actors around the world who seek to use aspects of this new law to impinge on the free expression of their own populations. That is why our work on this bill is so important.
We still have two main concerns about the legislation in its current format. The first is the definition, identification and deletion of illegal content. The legislation currently demands that platforms determine what is illegal and then automatically delete the content so it can't be seen, shared or amplified. In theory that sounds completely reasonable, but given the sheer scale of content on social media platforms these determinations will have to be made by algorithms, not people. And as we know, algorithms have built-in biases and struggle to identify nuance, satire or context. That is even more the case when the language isn't English or the content is imagery rather than words. When you add in the prospect of corporate fines and executive prosecution, it's likely that most platforms will opt to over-delete rather than risk falling foul of the new regulatory regime. Content that contains certain keywords, phrases or images is likely to be deleted by default, even if the context is the opposite of their normal use. Requiring automatic deletion without giving platforms a liability shield, so that they can retain posts without being criminally liable, will lead to mass over-deletion.
The second significant concern is the set of proposals to break end-to-end encryption. The government claim this would only be used to find evidence of child abuse, which again sounds reasonable. But end-to-end encryption cannot be a halfway house: something is either encrypted to ensure privacy or it isn't, and can therefore be hacked. And whilst no one would or should defend those who use such tools to hurt children, we need to consider how else the tools are used: by dissidents to tell their stories, by journalists to protect their sources, by families to share children's photos, by banks and online retailers to keep us financially protected, and by victims of domestic violence to plan their escape. This is not a simple tool which can be broken on a whim; we need to find a way to ensure that everyone is protected while we seek to protect children from abusers. This cannot be beyond the wit of our legislators, and in the coming months we'll find out.