Future of freedom of expression online does not have to be a dark one

David Kaye’s legal career began a decade before the rise of modern social media. Yet Kaye, a professor of international human rights law at the University of California, Irvine and the UN Special Rapporteur on the right to Freedom of Opinion and Expression, has had to adapt his legal practice to the complex ways in which the internet can be used to restrict, and in some cases empower, our ability to freely share information and opinions.

Kaye spoke to presenter, writer and comedian Timandra Harkness on 9 July to promote his recent book on regulating online freedom of expression, Speech Police: The Global Struggle to Govern the Internet. 

To begin the conversation, Harkness asked Kaye to give a brief overview of the threats he perceived to free expression online, and to explain how he thought international human rights law applied to the current debate over what content, if any, should be censored online. Kaye explained that the language of human rights law, particularly article 19 of the Universal Declaration of Human Rights, seemed designed for the digital age despite being penned in the 1940s.

He also discussed how changes to the internet have affected governments’ and corporations’ ability to regulate content. The decentralised, “blogosphere”-style internet of the early years of Kaye’s career was much harder to regulate than the current internet, in which a few large companies provide massive platforms where much of the internet’s discourse and information sharing takes place. “Without necessarily acting as censors … those companies [now] determine the boundaries of what we see online,” Kaye said.

That boundary, unfortunately, is increasingly defined by algorithms rather than people. It is always valuable to know “what is feeding the algorithm,” Kaye explained. The reason that algorithms for most social media companies function the way they do is that “they are algorithms for engagement … [that’s a] problem with the business model.” Rather than filling news feeds with the most important news, algorithms are designed to maximize engagement, sometimes to the detriment of providing users with a diversity of information. This raises questions about the ability of social media companies to maximize free access to information.

For people living under repressive regimes, however, internal social media regulation may be preferable to allowing the government to regulate and censor speech as it often does in traditional media, Kaye argued. Yet at the same time, social media companies have “entrenched interests” that influence the way they regulate speech, and their decisions can be murkier than those of governments and far more arbitrary. Kaye specifically mentioned the case of Germany, which prohibits Holocaust denial. There, he argued, the German government might ask that Facebook take down any content promoting Holocaust denial, but in doing so it implicitly gives Facebook the power to determine what content actually constitutes Holocaust denial rather than leaving the decision to a German court.

This “essentially asks those companies to determine what is legal under those countries’ laws, outsourcing the decision” to media platforms to be enforced in difficult-to-verify ways, Kaye noted.

Kaye raised a further concern about the confluence of government censorship and internal social media company decisions: terms of service. “[Terms of service] go beyond what governments can regulate in law,” he added. Since terms of service can censor what many governments legally cannot, it is impossible to know how often governments manipulate terms of service to suppress speech that is legal but inconvenient. That, said Kaye, is “itself a kind of government censorship.”

Kaye ended by speaking about the rule of law, which he viewed as a way to counteract some of the unregulated content moderation that happens on social media sites. “I think we have missed an opportunity for countries that have strong rule-of-law traditions that could have thought more creatively about regulating [social media],” he said. More pessimistically, Kaye continued, we do not yet know whether countries with rule-of-law traditions will treat social media with a “rule-of-law framework.”

The future of freedom of expression online does not have to be a dark one. Rule-of-law countries should “model what they want freedom of expression to look like in the future,” he concluded. It is up to countries like the UK, he argued, to set an example for how to regulate the online platforms that will define the free speech debates of the 21st century.

Breaking the digital sphere: a campaign to fight for the protection of online spaces

Social media platforms wield immense control over the information we see online. With rising pressure from governments and increasing reliance on algorithms, social media platforms are in danger of silencing millions of activists and marginalised groups across the world with content takedowns and blocked accounts.

You are invited to hear the views of our panellists and take part in a discussion that will shape ARTICLE 19’s campaign.

Our vision for the campaign is to safeguard freedom of expression online. This cannot be achieved without better accountability and transparency. We will be calling on social media platforms to respect due process guarantees and create clear and transparent mechanisms to enforce such guarantees. Some questions we will be discussing include:

● How are content takedowns and account deactivations affecting activism?
● What is the scale of the problem and its impact on free speech?
● What is the role of the authorities in content takedowns on social media platforms?
● What can be done to improve accountability and transparency online?
● What can the campaign do to amplify the voices of those seeking change?

Panellists

Thomas Hughes has been executive director of ARTICLE 19 since 2013. For the past two decades, Hughes has worked on human rights and media development issues, including as deputy director of International Media Support (IMS) between 2005 and 2010, as well as previously for the United Nations, European Commission and Organisation for Security and Cooperation in Europe (OSCE).

Jennifer Robinson is a barrister in London. Her practice focuses on international law, free speech and civil liberties. She advises media organisations, journalists and whistle-blowers on all aspects of media law. Robinson serves as a trustee of the Bureau of Investigative Journalism, and sits on the advisory boards of the European Center for Constitutional and Human Rights and the Bonavero Institute of Human Rights at the University of Oxford.

Pavel Marozau is a civic and internet activist. He faced politically motivated persecution by the Belarusian authorities, who accused him of slandering President Lukashenko over satirical animated films he produced. At the Geneva Summit, Marozau founded a network of activists from Iran, Burma, Venezuela, Cuba, Zimbabwe and Egypt, and he is also a founder of the counter-propaganda web television channel ARU TV.

Paulina Gutiérrez is an international human rights lawyer and internet freedom advocate in Latin America. She holds degrees in law and international relations. Over the last four years, Gutiérrez has designed and developed the digital rights agenda for ARTICLE 19’s Mexico and Central America Regional Office. She is also a member of INDELA’s Advisory Board and Benetech’s Human Rights Program Advisory Board.

When: Thursday 20 June 5:30-10pm
Where: The Law Society’s Hall, 113 Chancery Lane, London WC2A 1PL
Tickets: Free. Registration required via Eventbrite


Content bans won’t just eliminate “bad” speech online

Social media platforms have enormous influence over what we see and how we see it.

We should all be concerned about the knee-jerk actions taken by the platforms to limit legal speech and approach with extreme caution any solutions that suggest it’s somehow easy to eliminate only “bad” speech.

Those supporting the removal of videos that “justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status” might want to pause to consider that it isn’t just content about conspiracy theories or white supremacy that will be removed.

In the wake of YouTube’s announcement on Wednesday 5 June, independent journalist Ford Fischer tweeted that some of his videos, which report on activism and extremism, had been flagged by the service for violations. Teacher Scott Allsopp had his channel, which featured hundreds of historical clips, deleted for breaching the rules that ban hate speech, though it was later restored with some videos still flagged.

It’s not just Google’s YouTube that has tripped over the inconsistent policing of speech online.

Twitter has removed tweets for violating its community standards, as in the case of US high school teacher and activist Carolyn Wysinger, whose post responding to actor Liam Neeson saying he’d roamed the streets hunting for black men to harm was deleted by the platform. “White men are so fragile,” the post read, “and the mere presence of a black person challenges every single thing in them.”

In the UK, gender critical feminists who have quoted academic research on sex and gender identity have had their Twitter accounts suspended for breaching the platform’s hateful conduct policy, while threats of violence towards women often go unpunished.

Facebook, too, has suspended the pages of organisations that have posted about racist behaviours.

If we are to ensure that all our speech is protected, including speech that calls out others for engaging in hateful conduct, then social media companies’ policies and procedures need to be clear, accountable and non-partisan. Any decisions to limit content should be taken by, and tested by, human beings. Algorithms simply cannot parse the context and nuance sufficiently to distinguish, say, racist speech from anti-racist speech.

We need to tread carefully. While an individual who incites violence towards others should not (and does not) enjoy the protection of the law, on any platform or in any kind of media, the problem of those who advocate hate cannot be solved by simply banning them.

In the drive to stem the tide of hateful speech online, we should not rush to welcome an ever-widening definition of speech to be banned by social media.

This means we, as users, might have to tolerate conspiracy theories, the offensive and the idiotic, as long as they do not incite violence. That doesn’t mean we can’t challenge them. And we should.

But the ability to express contrary points of view, to call out racism, to demand retractions and to highlight obvious hypocrisy depends on the ability to freely share information.

UK government proposals to tackle online harms pose real risk to online freedom of expression

The Rt Hon Jeremy Wright QC MP
Secretary of State for Digital, Culture, Media and Sport
100 Parliament Street
London SW1A 2BQ

6 March 2019

Re: Online Harms White Paper

Dear Secretary of State,

We write to you as civil society organisations who work to promote human rights, both offline and online. As such, we are taking a keen interest in the government’s focus on tackling unlawful and harmful online content, particularly since the publication of the Internet Safety Strategy Green Paper in 2017. In October 2018, we published a joint statement noting that any proposals are likely to have a significant impact on the enjoyment and exercise of human rights online, particularly freedom of expression. We have also met with your officials from the Department for Digital, Culture, Media and Sport, as well as from the Home Office, to raise our thoughts and concerns. With the publication of the Online Harms White Paper imminent, we wanted to write to you personally. A number of our organisations wrote to you about this last summer, and your office kindly offered to meet us. We would be very keen to meet in person, if that offer is still open.

While we recognise and support the government’s legitimate desire to tackle unlawful and harmful content online, the proposals that have been mooted publicly by government ministers in recent months – including a new duty of care on social media platforms, a new regulatory body, and even the fining and banning of social media platforms as a sanction – have reinforced our initial concerns over the serious risks to freedom of expression online that could stem from the government’s proposals. These risks could put the United Kingdom in breach of its obligations to respect and promote the right to freedom of expression and information as set out in Article 19 of the International Covenant on Civil and Political Rights and Article 10 of the European Convention on Human Rights, amongst other international treaties.

Social media platforms are a key means for tens of millions of individuals in the United Kingdom to search for, receive, share and impart information, ideas and opinions. The scope of the right to freedom of expression includes speech which may be offensive, shocking or disturbing. There is a real risk that the currently mooted proposals may lead to disproportionate amounts of speech being curtailed, undermining the right to freedom of expression.

Given this risk, we believe that it is essential for human rights requirements and considerations to be at the heart of the policymaking process. We urge the government to take a ‘human rights by design’ approach towards all legislation, regulation and other measures ultimately proposed. In particular, we make the following specific recommendations:

  • First, the government must set out a clear evidence base in relation to any proposals put forward in the Online Harms White Paper. The wide range of different harms which the government is seeking to tackle in this policy process requires different, tailored responses. Measures proposed must be underpinned by strong evidence, both of the likely scale of the harm and of the measures’ likely effectiveness. The evidence which formed the basis of the Internet Safety Strategy Green Paper was highly variable in its quality. Any legislative or regulatory measures proposed in the White Paper should be supported by clear and unambiguous evidence of their need and effectiveness.
  • Second, we urge the government to fully consider non-legislative measures before opting for regulation in this field. Other potentially highly effective options, such as increasing public awareness and digital literacy, a curriculum and resource focus on digital skills in schools, promoting “safety by design” among tech product designers and developers, and supporting existing initiatives, should be set out in the Online Harms White Paper.
  • Third, greater transparency on the part of social media platforms and others involved in the moderation and removal of online content should be the starting point when it comes to any regulation being considered. Transparency should not simply focus on the raw number of pieces of content flagged and removed; it should instead more holistically require platforms to provide user-accessible information about the policies they have in place to respond to unlawful and harmful content, how those policies are implemented, reviewed and updated to respond to evolving situations and norms, and what company- or industry-wide steps they have taken, or are planning to take, to improve these processes.
  • Fourth, we strongly caution against proposals which attach liability to platforms for third party content, such as a binding Code of Practice, a new ‘duty of care’ or a new regulatory body. While well-meaning, proposals such as these contain serious risks, such as requiring or incentivising wide-sweeping removal of lawful and innocuous content. The imposition of time limits for removal, heavy sanctions for non-compliance or incentives to use automated content moderation processes only heighten this risk, as evidenced by the approach taken in Germany via its Network Enforcement Act (NetzDG), which has led to the over-removal of lawful content.(1)
  • Fifth, we expect any legislative or regulatory proposals to contain explicit and unambiguous language on the importance of freedom of expression. It is vital that any legislative or regulatory scheme which seeks to limit speech explicitly references the human right to free expression so that this infuses how the scheme is implemented and enforced in practice. Such language should be set out both in any legislation ultimately proposed and in any secondary legislation or regulatory guidance ultimately developed.
  • Sixth, in recognition of the UK’s commitment to the multistakeholder model of internet governance, we stress the importance of all relevant stakeholders, including civil society, being fully engaged throughout the Online Harms White Paper’s consultation period and able to participate in the design and implementation of any measures which are finally adopted.

We appreciate your consideration of these points and look forward to continuing our engagement with your department as the Online Harms White Paper is published and throughout the policy process.

Yours sincerely,

Charles Bradley
Executive Director
Global Partners Digital

Jodie Ginsberg
Chief Executive
Index on Censorship

Jim Killock
Executive Director
Open Rights Group
1. See, for example, Scott, M. and Delcker, J., “Free speech vs. censorship in Germany”, Politico, 14 January 2018, available at: https://www.politico.eu/article/germany-hate-speech-netzdg-facebook-youtube-google-twitter-free-speech, and Kinstler, L., “Germany’s Attempt to Fix Facebook Is Backfiring”, The Atlantic, 18 May 2018, available at: https://www.theatlantic.com/international/archive/2018/05/germany-facebook-afd/560435/.