Online harms and media freedom: UK response to Council of Europe lacks concrete details

[vc_row][vc_column][vc_column_text]

The UK has responded to an official alert to the Council of Europe’s platform to protect journalism, which was filed by Index on Censorship and co-submitted by the Association of European Journalists on 25 April 2019. The alert highlights the risks to media freedom in proposals in the government’s recently released online harms white paper.

The UK response is a copy of a letter from Culture Secretary Jeremy Wright to Ian Murray of the Society of Editors, who had raised concerns that the proposals, designed to combat support for terrorism, child abuse and other harms on the internet, would also impinge on press freedom.

The letter from Jeremy Wright states that journalistic or editorial content will not be affected by the proposed regulatory framework. However, it is very difficult to see how this could be avoided.

The proposals in the white paper cover companies of all sizes, including non-profit organisations. Even a small blog with a comments section, for example, would fall under the remit of the proposed online content regulator.

The letter further states that the future “regulator will not be responsible for policing truth and accuracy online”. However, a new legal duty of care would cover a very broad range of “harms”, for example disinformation and violent content. Combined with substantial fines and potentially even personal criminal liability for senior managers, this would create a very strong incentive for platforms to remove content proactively, including news stories that might be deemed ‘harmful’ by the regulator even though their content is not illegal.

Other proposals in the white paper may also have damaging impacts on media freedom, such as potential ISP blocking (making a platform inaccessible in or from the UK). The draft provisions for protecting users’ rights online, particularly freedom of expression, the right to privacy and the public interest, are currently sketchy and inadequate. They need to be robust, detailed and provided for in law.

Index and AEJ look forward to a more comprehensive UK state reply to the Council of Europe, which explains unequivocally and in concrete detail how proposals in the white paper will be made compatible with the UK’s obligations to safeguard journalism and media freedom.

[/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_basic_grid post_type=”post” max_items=”4″ element_width=”6″ grid_id=”vc_gid:1560957147075-368b52fb-c52c-5″ taxonomies=”32807″][/vc_column][/vc_row]

UK government proposals to tackle online harms pose real risk to online freedom of expression

[vc_row][vc_column][vc_single_image image=”103235″ img_size=”full”][vc_column_text]The Rt Hon Jeremy Wright QC MP
Secretary of State for Digital, Culture, Media and Sport
100 Parliament Street
London SW1A 2BQ

6 March 2019

Re: Online Harms White Paper

Dear Secretary of State,

We write to you as civil society organisations who work to promote human rights, both offline and online. As such, we are taking a keen interest in the government’s focus on tackling unlawful and harmful online content, particularly since the publication of the Internet Safety Strategy Green Paper in 2017. In October 2018, we published a joint statement noting that any proposals are likely to have a significant impact on the enjoyment and exercise of human rights online, particularly freedom of expression. We have also met with your officials from the Department for Digital, Culture, Media and Sport, as well as from the Home Office, to raise our thoughts and concerns. With the publication of the Online Harms White Paper imminent, we wanted to write to you personally. A number of our organisations wrote to you about this last summer, and your office kindly offered to meet us. We would be very keen to meet in person, if that offer is still open.

While we recognise and support the government’s legitimate desire to tackle unlawful and harmful content online, the proposals that have been mooted publicly by government ministers in recent months – including a new duty of care on social media platforms, a new regulatory body, and even the fining and banning of social media platforms as a sanction – have reinforced our initial concerns over the serious risks to freedom of expression online that could stem from the government’s proposals. These risks could put the United Kingdom in breach of its obligations to respect and promote the right to freedom of expression and information as set out in Article 19 of the International Covenant on Civil and Political Rights and Article 10 of the European Convention on Human Rights, amongst other international treaties.

Social media platforms are a key means for tens of millions of individuals in the United Kingdom to search for, receive, share and impart information, ideas and opinions. The scope of the right to freedom of expression includes speech which may be offensive, shocking or disturbing. There is a real risk that the currently mooted proposals may lead to disproportionate amounts of speech being curtailed, undermining the right to freedom of expression.

Given this risk, we believe that it is essential for human rights requirements and considerations to be at the heart of the policymaking process. We urge the government to take a ‘human rights by design’ approach towards all legislation, regulation and other measures ultimately proposed. In particular, we make the following specific recommendations:

  • First, the government must set out a clear evidence base in relation to any proposals put forward in the Online Harms White Paper. The wide range of different harms which the government is seeking to tackle in this policy process requires different, tailored responses. Measures proposed must be underpinned by strong evidence, both of the likely scale of the harm and of the measures’ likely effectiveness. The evidence which formed the basis of the Internet Safety Strategy Green Paper was highly variable in its quality. Any legislative or regulatory measures proposed in the White Paper should be supported by clear and unambiguous evidence of their need and effectiveness.
  • Second, we urge the government to consider fully non-legislative measures before opting for regulation in this field. Other potentially highly effective options, such as increasing public awareness and digital literacy, a curriculum and resource focus on digital skills in schools, promoting “safety by design” amongst tech product designers and developers, and supporting existing initiatives, should be set out in the Online Harms White Paper.
  • Third, greater transparency on the part of social media platforms and others involved in the moderation and removal of online content should be the starting point for any regulation being considered. Transparency should not simply focus on the raw number of pieces of content flagged and removed; it should instead more holistically require platforms to provide user-accessible information about the policies they have in place to respond to unlawful and harmful content, how those policies are implemented, reviewed and updated to respond to evolving situations and norms, and what company or industry-wide steps they have taken, or are planning to take, to improve these processes.
  • Fourth, we strongly caution against proposals which attach liability to platforms for third party content, such as a binding Code of Practice, a new ‘duty of care’ or a new regulatory body. While well-meaning, proposals such as these contain serious risks, such as requiring or incentivising wide-sweeping removal of lawful and innocuous content. The imposition of time limits for removal, heavy sanctions for non-compliance or incentives to use automated content moderation processes only heighten this risk, as has been evidenced by the approach taken in Germany via its Network Enforcement Act (or NetzDG), where there is evidence of the over-removal of lawful content.(1)
  • Fifth, we expect any legislative or regulatory proposals to contain explicit and unambiguous language on the importance of freedom of expression. It is vital that any legislative or regulatory scheme which seeks to limit speech explicitly references the human right to free expression, so that this infuses how the scheme is implemented and enforced in practice. Such language should be set out both in any legislation ultimately proposed and in any secondary legislation or regulatory guidance subsequently developed.
  • Sixth, in recognition of the UK’s commitment to the multistakeholder model of internet governance, we stress the importance of all relevant stakeholders, including civil society, being fully engaged throughout the Online Harms White Paper’s consultation period, and able to participate in the design and implementation of any measures which are finally adopted.

We appreciate your consideration of these points and look forward to continuing our engagement with your department as the Online Harms White Paper is published and throughout the policy process.

Yours sincerely,[/vc_column_text][vc_row_inner][vc_column_inner width=”1/3″][vc_column_text]Charles Bradley
Executive Director
Global Partners Digital[/vc_column_text][/vc_column_inner][vc_column_inner width=”1/3″][vc_column_text]Jodie Ginsberg
Chief Executive
Index on Censorship[/vc_column_text][/vc_column_inner][vc_column_inner width=”1/3″][vc_column_text]Jim Killock
Executive Director
Open Rights Group[/vc_column_text][/vc_column_inner][/vc_row_inner][vc_column_text]
1. See, for example, Scott, M. and Delcker, J., “Free speech vs. censorship in Germany”, Politico, 14 January 2018, available at: https://www.politico.eu/article/germany-hate-speech-netzdg-facebook-youtube-google-twitter-free-speech, and Kinstler, L., “Germany’s Attempt to Fix Facebook Is Backfiring”, The Atlantic, 18 May 2018, available at: https://www.theatlantic.com/international/archive/2018/05/germany-facebook-afd/560435/.[/vc_column_text][vc_basic_grid post_type=”post” max_items=”4″ element_width=”6″ grid_id=”vc_gid:1551880941891-44b3d529-2ac3-9″ taxonomies=”16927, 4883″][/vc_column][/vc_row]

Wider definition of harm can be manipulated to restrict media freedom

[vc_row][vc_column][vc_column_text]

Index on Censorship welcomes a report by the House of Commons Digital, Culture, Media and Sport select committee into disinformation and fake news that calls for greater transparency on social media companies’ decision-making processes, on who posts political advertising and on the use of personal data. However, we remain concerned about attempts by government to establish systems that would regulate “harmful” content online, given that there remains no agreed definition of harm in this context beyond content that is already illegal.

Despite a number of reports, including the government’s Internet Safety Strategy green paper, that have examined the issue over the past year, none has yet produced a definition of harmful content that goes beyond speech and expression that are already illegal. DCMS recognises this in its report when it quotes the Secretary of State Jeremy Wright discussing “the difficulties surrounding the definition.” Despite acknowledging this, the report’s authors nevertheless expect “technical experts” to be able to set out “what constitutes harmful content” to be overseen by an independent regulator.

International experience shows that in practice it is extremely difficult to define harmful content in such a way that would target only “bad speech”. Last year, for example, activists in Vietnam wrote an open letter to Facebook complaining that Facebook’s system of automatically pulling content if enough people complained could “silence human rights activists and citizen journalists in Vietnam”, while Facebook has shut down the livestreams of people in the United States using the platform as a tool to document their experiences of police violence.

“It is vital that any new system created for regulating social media protects freedom of expression, rather than introducing new restrictions on speech by the back door,” said Index on Censorship chief executive Jodie Ginsberg. “We already have laws to deal with harassment, incitement to violence, and incitement to hatred. Even well-intentioned laws meant to tackle hateful views online often end up hurting the minority groups they are meant to protect, stifle public debate, and limit the public’s ability to hold the powerful to account.”

The select committee report provides the example of Germany as a country that has legislated against harmful content on tech platforms. However, it fails to mention that the German Network Enforcement Act (NetzDG) legislated on content that was already illegal, or the widespread criticism of the law, including from the UN special rapporteur on freedom of expression and groups such as Human Rights Watch. It also cites the fact that one in six of Facebook’s moderators now works in Germany as “practical evidence that legislation can work.”

“The existence of more moderators is not evidence that the laws work,” said Ginsberg. “Evidence would be if more harmful content had been removed and if lawful speech flourished. Given that there is no effective mechanism for challenging decisions made by operators, it is impossible to tell how much lawful content is being removed in Germany. But the fact that Russia, Singapore and the Philippines have all cited the German law as a positive example of ways to restrict content online should give us pause.”

Index has reported on various examples of the German law being applied incorrectly, including the removal of a tweet by journalist Martin Eimermacher criticising the double standards of tabloid newspaper Bild Zeitung, and the blocking of the Twitter account of German satirical magazine Titanic. The Association of German Journalists (DJV) said the Twitter move amounted to censorship, adding that it had warned of this danger when the law was drawn up.

Index is also concerned about continued calls for tools to distinguish between “quality journalism” and unreliable sources, most recently in the Cairncross Review. While we recognise that the ability to make this distinction, as individuals and through education, is key to democracy, we are worried that reliance on a labelling system could create false positives, and that smaller or newer journalism outfits would find themselves rejected by the system.

About Index on Censorship

Index on Censorship is a UK-based nonprofit that campaigns against censorship and promotes free expression worldwide. Founded in 1972, Index has published some of the world’s leading writers and artists in its award-winning quarterly magazine, including Nadine Gordimer, Mario Vargas Llosa, Samuel Beckett and Kurt Vonnegut. Index promotes debate, monitors threats to free speech and supports individuals through its annual awards and fellowship programme.

Contact: [email protected][/vc_column_text][vc_basic_grid post_type=”post” max_items=”4″ element_width=”6″ grid_id=”vc_gid:1550487607611-2c41f248-b775-10″ taxonomies=”6534″][/vc_column][/vc_row]