Future of freedom of expression online does not have to be a dark one
UN Special Rapporteur David Kaye speaks to Timandra Harkness about free speech and the internet
12 Jul 19

David Kaye's legal career began a decade before the rise of modern social media. Yet Kaye, a professor of international human rights law at the University of California, Irvine and the UN Special Rapporteur on the right to Freedom of Opinion and Expression, has had to adapt his legal practice to the complex ways in which the internet can prevent us from freely sharing information and opinions, and in some cases empower us to do so.

Kaye spoke to presenter, writer and comedian Timandra Harkness on 9 July to promote his recent book on regulating online freedom of expression, Speech Police: The Global Struggle to Govern the Internet. 

To begin the conversation, Harkness asked Kaye for a brief overview of the threats he perceived to free expression online, and of how international human rights law applies to the current debate over what content, if any, to censor online. Kaye explained that the language of human rights law, particularly Article 19 of the Universal Declaration of Human Rights, seemed designed for the digital age despite being penned in the 1940s.

He also discussed how changes to the internet have affected governments' and corporations' ability to regulate content. The decentralised, "blogosphere"-style internet of the early years of Kaye's career was much harder to regulate than the current internet, in which a few large companies provide massive platforms where large percentages of internet discourse and information sharing take place. "Without necessarily acting as censors … those companies [now] determine the boundaries of what we see online," Kaye said.

That boundary, unfortunately, is increasingly defined by algorithms rather than people. It is always valuable to know "what is feeding the algorithm," Kaye explained. The reason that most social media companies' algorithms function the way they do is that "they are algorithms for engagement … [that's a] problem with the business model." Rather than filling news feeds with the most important news, algorithms are designed to maximise engagement, sometimes to the detriment of providing users with a diversity of information. This raises questions about the ability of social media companies to maximise free access to information.

For people living under repressive regimes, however, internal social media regulation may be preferable to allowing the government to regulate and censor speech as it often does in traditional media, Kaye argued. Yet at the same time, social media companies have "entrenched interests" that shape the way they regulate speech, and those interests can be murkier than a government's and far more arbitrary. Kaye specifically mentioned the case of Germany, which prohibits Holocaust denial. In that case, Kaye argued, the German government might ask Facebook to take down any content promoting Holocaust denial, but in doing so it implicitly gives Facebook the power to determine what content actually constitutes Holocaust denial, rather than leaving the decision to a German court.

This “essentially asks those companies to determine what is legal under those countries’ laws, outsourcing the decision” to media platforms to be enforced in difficult-to-verify ways, Kaye noted.

Another concern Kaye raised about the confluence of government censorship and internal social media company decisions involved terms of service. "[Terms of service] go beyond what governments can regulate in law," he added. Since terms of service can censor what many governments legally cannot, it is impossible to know how often governments exploit them to suppress speech that is legal but inconvenient. That, said Kaye, is "itself a kind of government censorship."

Kaye ended by speaking about the rule of law, which he viewed as a way to counteract some of the unregulated content moderation that happens on social media sites. "I think we have missed an opportunity for countries that have strong rule-of-law traditions that could have thought more creatively about regulating [social media]," he said. More pessimistically, Kaye continued, it remains unclear whether countries with rule-of-law traditions will treat social media within a "rule-of-law framework."

The future of freedom of expression online does not have to be a dark one. Rule-of-law countries should "model what they want freedom of expression to look like in the future," he concluded. It is up to countries like the UK, he argued, to set an example for the future regulation of free expression on the online platforms that will define the free speech debates of the 21st century.

By Maya Rubin

Maya Rubin is an undergraduate intern at Index on Censorship. She is a student at Wellesley College and was an Adam Smith Fellow at the Freedom Project 2018-2019.