Europe’s proposed regulation on online extremism endangers freedom of expression


Index on Censorship shares the widespread concerns about the proposed EU regulation on preventing the dissemination of terrorist content online. The regulation would endanger freedom of expression and would create huge practical challenges for companies and member states.

Jodie Ginsberg, CEO of Index, said: “We urge members of the European Parliament and representatives of EU member states to consider if the regulation is needed at all. It risks creating far more problems than it solves. At a minimum the regulation should be completely revised.”

Following the recent agreement by the European Council on a draft position for the proposed regulation on “preventing the dissemination of terrorist content online,” which adopted the initial draft presented by the European Commission with some changes, the Global Network Initiative (GNI) is concerned about the potential unintended effects of the proposal and would therefore like to put forward a number of issues we urge the European Parliament to address as it considers the proposal further.

GNI members recognize and appreciate the European Union (EU) and member states’ legitimate roles in providing security, and share the aim of tackling the dissemination of terrorist content online. However, we believe that, as drafted, this proposal could unintentionally undermine that shared objective by putting too much emphasis on technical measures to remove content, while simultaneously making it more difficult to challenge terrorist rhetoric with counter-narratives. In addition, the regulation as drafted may place significant pressure on a range of information and communications technology (ICT) companies to monitor users’ activities and remove content in ways that pose risks for users’ freedom of expression and privacy. We respectfully ask that EU officials, Parliamentarians, and member states take the time necessary to understand these and other significant risks that have been identified, by consulting openly and in good faith with affected companies, civil society, and other experts.

Background on the Proposal

This regulation follows previous EU efforts to reduce the proliferation of extremist content online, including the EU Internet Forum launched in December 2015 and the March 2017 directive on combating terrorism. However, the proposed regulation would move beyond the voluntary cooperation underpinning previous initiatives and require member states to establish legal penalties against ICT companies for failure to comply with the obligations outlined in this proposal. GNI joins others who have flagged that more work is needed to analyze the effectiveness of these past approaches before the current proposal can be justified as necessary and appropriate.

This effort also comes against a backdrop of separate initiatives by the European Commission to address other areas of “controversial content” online, including the May 2016 “Code of Conduct for Addressing Hate Speech Online”, the March 2018 “Recommendation on measures to effectively tackle illegal content online,” and the September 2018 “Code of Practice to address the spread of online disinformation and fake news.”

GNI’s Work on Extremist Content to Date

GNI is the world’s preeminent multistakeholder collaboration in support of freedom of expression and privacy online. GNI’s members include leading academics, civil society organizations, ICT companies, and investors from across the world. All GNI members subscribe to and support the GNI Principles on Freedom of Expression and Privacy (“the Principles”), which are drawn from widely adopted international human rights instruments. The Principles, together with our corresponding Implementation Guidelines, create a set of expectations and recommendations for how companies should respond to government requests that could affect the freedom of expression and privacy rights of their users. The efforts of our member companies to implement these standards are assessed by our multistakeholder board every other year.

In July 2015, GNI launched a policy dialogue — which began internally, and later expanded to include external stakeholders, including the European Commission and some member states — to explore key questions and considerations about government efforts to restrict online content with the aim of protecting public safety, and to discuss the human rights implications of such government actions. In December 2016, GNI released a policy brief, “Extremist Content and the ICT Sector,” that was informed by that dialogue and included recommendations for governments and companies to protect and respect freedom of expression and privacy rights when responding to alleged extremist or terrorist content online. We refer to these recommendations as a basis to highlight the following elements in the proposed regulation which remain a potential concern.

Definitional Challenges

In our policy brief, GNI noted that laws that prohibit incitement to terrorism “should only target unlawful speech that is intended to incite the commission of a terrorist offense and that causes a danger that a terrorist offense or violent act may be committed.” While the Council’s amendments clearly try to add greater definitional clarification to the requirement for “intent,” the regulation continues to reference the definition of “terrorist content” found in Directive (EU) 2017/541, which has been deemed problematic by human rights groups and independent experts. In addition, because this definition is based in a Directive, it creates the possibility that it will be interpreted with significant variance across member states. These definitional issues are likely to lead to legal uncertainty, as well as potentially overly aggressive interpretations by companies that could result in the removal of content that should be protected under the Charter of Fundamental Rights and Member State constitutions. Notably, and unlike the definition of terrorist offences in the Directive, the definition of terrorist content in the regulation does not clarify that content must amount to a criminal offence or be punishable under national law.

GNI notes in our extremist content brief that laws and policies should clearly distinguish between “messages that aim to incite terrorist acts and those that discuss, debate, or report them.” Because the regulation fails to make such a clear distinction, it will pose particular risks to the legitimate expression of journalists and researchers working on documenting terrorist abuses. It may also, unintentionally, impact those working on counter-terrorism efforts, including those trying to use arguments based in humor, satire, or religious doctrines to engage in counter-messaging or counter-narrative efforts.

Removal Orders

The proposal allows designated “competent authorities” to issue removal orders requiring companies to remove terrorist content deemed illegal under the proposed regulation within one hour of receiving the order. As noted in our policy brief, GNI members are expected to “interpret government restrictions and demands, as well as governmental authority’s jurisdiction, so as to minimize the negative effects on freedom of expression.” The prescribed one-hour timeline potentially creates significant challenges for the appropriate review of removal orders.

In addition, the potentially significant legal penalties for noncompliance will put increased pressure on companies to comply with these orders. While we appreciate the provisions, particularly in the Council’s amendments, allowing for companies to appeal such orders to the judicial authority of the member state that issued the request, it is not clear that this appeal delays the timeline for removal. If content is removed, the amount of time it can take for appropriate redress to take place and for content to be reinstated poses substantial freedom of expression risks.

Finally, GNI members have also worked extensively to understand and address the jurisdictional challenges that emerge when governments make orders that end up being enforced through or having impacts on other jurisdictions. While the provision for a “consultation procedure” added by the Council is helpful, the proposal still creates significant potential for conflicts of laws to emerge, which would add to the aforementioned lack of legal clarity. 

Referrals

The proposed regulation would allow member states to establish “competent authorities” to issue referrals of content “that may be considered terrorist” for review by companies under their own terms and conditions and, if appropriate, removal. As others have noted, it is not clear whether competent authorities or member states are expected to have already determined that the content is illegal under national law prior to submitting a referral. The proposal also requires companies to establish “operational and technical measures facilitating expeditious assessment of content sent by competent authorities.”

In “Extremist Content and the ICT Sector,” we raised concerns about the potential for this type of referral to “set precedents for extra-judicial government censorship without adequate access to remedy, accountability, or transparency for users and the public.” GNI has called on governments to use formally established legal procedures when they demand the restriction of content by ICT companies, to adopt additional safeguards, and to be clear about whether they are issuing referrals or legal orders; these referrals would appear not to meet that standard. It is also unclear whether there would or could be any independent, judicial oversight of this mechanism, and yet the proposal notes that a lack of expeditious response could lead to the implementation of proactive measures or even legal penalties.

Duties of Care/Proactive Measures

GNI noted in “Extremist Content and the ICT Sector” that governments should not pressure companies to change their terms of service. Yet Article 3 of the proposal establishes “duties of care” whereby companies are expected to undertake reasonable and proportionate actions in accordance with the regulation for the removal of extremist content on their platforms and, furthermore, are expected to “include, and apply, in their terms and conditions provisions to prevent the dissemination of terrorist content.”

Beyond these “duties of care,” the proposed regulation also outlines an expectation for companies to undertake “effective and proportionate” “proactive measures to protect their services against the dissemination of content,” including through automated means. While the Council’s position notes that this requirement applies “depending on the risk and level of exposure to terrorist content,” this fails to clarify if, when, and how companies should take such measures. A company that receives a removal order under Article 4 is required to implement these proactive measures, both to prevent re-upload of the content identified in a previous removal order and against terrorist content more broadly, and must report back to the competent authority within three months on the proactive measures in place for “detecting, identifying, and expeditiously removing or disabling access to terrorist content.”

This aspect of the proposal poses an increased risk to the right to privacy, insofar as it calls on companies to proactively monitor and filter their users’ communications. Furthermore, as the proposal acknowledges, it “could derogate” from the provisions against a “general monitoring obligation” in Article 15 of the e-Commerce Directive, “as regards certain specific, targeted measures, the adoption of which is necessary for overriding public security reasons” (see recital 19). Finally, it is important to recognize the potential freedom of expression risks that come from a reliance on automated filtering measures. Limitations on the effectiveness of existing technology to search, analyze, and filter content online are often underappreciated and can lead to over-removal of legitimate content.

Transparency & Redress

GNI would like to emphasize the critical need to ensure adequate redress and transparency measures are in place throughout the various elements of this proposal. The proposal clarifies requirements for companies to make available the reasons for content removals, as well as avenues for content providers to contest those decisions. However, there are no similar requirements for the competent authorities, and, as previously noted, appeals to member states’ judicial authorities under Article 4 do not necessarily suspend the removal timeline. In sum, the proposal’s repeated references to users’ right to remedy and its provisions on redress do not seem to be matched by specific guidance and effective implementation.

In our policy brief, GNI members flagged that “governments should regularly and publicly report, at a minimum, the aggregate numbers of requests and/or legal orders made to companies to restrict content and the number of users impacted by these requests,” and, with regard to requests for removal made under companies’ terms of service, that “Governments should regularly and publicly report, at a minimum, the aggregate number of requests made to companies to restrict content and the number of users impacted by these requests.” While we appreciate that the transparency obligations for companies under Article 8 include a requirement to report on the “number of pieces of terrorist content removed or to which access has been disabled, following removal orders, referrals, or proactive measures, respectively,” there is little in the way of similar requirements for governments anywhere in the proposal. Under the current proposal, the obligatory government reporting only appears to apply for the purposes of assessing the proposal’s implementation and effectiveness, not for providing transparency to users and the general public.

Practical Issues

GNI would also like to flag some potential ambiguities in implementation that pose risks for users’ rights. First, there is very limited guidance for member states’ designation of the competent authorities that carry out the provisions under Articles 4, 5, and 6. While Article 17 states that all member states must designate a competent authority or competent authorities and notify the Commission, the only requirement seems to be that competent authorities are “administrative, law enforcement or judicial authorities” (see recital 13). Furthermore, states are able to designate multiple competent authorities, which could cause confusion for companies receiving requests. Several companies have stated that member states should be required to establish a single authority, which would seem a reasonable request.

Second, there are stringent requirements for companies to establish legal representatives to “ensure compliance with enforcement of the obligations under this regulation” (see recital 35), as well as points of contact to “facilitate the swift handling of removal orders and referrals,” including the one-hour timeline. In addition, the definition of “hosting service providers” in the regulation has been criticized for its lack of clarity as to which companies are covered by the proposal. In combination, these issues pose particular challenges for small and medium-sized enterprises, which may lack the infrastructure to handle rapid, round-the-clock requests and to properly assess the potential human rights impacts, or which may be deterred from pursuing business opportunities by the cost of compliance with this regulation.

Conclusion

As noted above, the proposed regulation raises significant issues that must be addressed before it is enacted into law. At a minimum, amendments should: (i) ensure key provisions, such as the definitions of terrorist content, hosting service providers, and competent authorities, are refined and clarified; (ii) clarify that legal challenges by companies to content removal orders will toll the one-hour clock for related removals; (iii) require that content referrals under the regulation are reviewed against relevant laws and that appropriate oversight mechanisms are in place for referrals; (iv) remove requirements that companies modify their terms and conditions; (v) eliminate, or significantly limit, situations where companies will be ordered or expected to implement “proactive measures” against their will; and (vi) strengthen provisions on remedy and transparency, including vis-à-vis government decisions.

GNI recognizes the importance of taking measures to prevent the dissemination of terrorist content online and stands ready to continue engaging with relevant actors, including the Council, the Commission, and Parliament to ensure that our collective efforts to address this challenge remain effective, efficient, and consistent with applicable human rights principles.

Global Network Initiative addresses global delisting case


The Global Network Initiative, in which Index on Censorship is a participant, notes the decision by the French courts to refer the global internet search de-listing case to the Court of Justice of the European Union.

“This important case raises complex issues related to internationally protected rights to freedom of expression and privacy, and the ability of governments to assert jurisdiction beyond borders. We hope the Court will take the opportunity to carefully consider the consequences for human rights – not just in Europe, but around the world,” said Mark Stephens CBE, GNI Independent Board Chair and international human rights lawyer.

“We are concerned that if a single jurisdiction can mandate the global removal of search information it sends a message to all governments – authoritarian and democratic – that they each can reach beyond their borders and restrict access to content which is perfectly lawful in other jurisdictions,” Mr. Stephens said.

“The unintended consequences of global delisting include countries passing laws that restrict global access to information such as criticism of leaders and governments, and content relating to religious and ethnic minorities, LGBT people and women’s health,” he said.

In March 2016, Google appealed the ruling of the Commission Nationale de L’Informatique et des Libertes (CNIL), which requires that search results deemed subject to the “Right to be Forgotten” be blocked not just across the European Union, but globally.

GNI has long been concerned that a global de-listing mandate sets a disturbing precedent for the cause of an open and free internet, with consequences for global access to information and freedom of expression, including for journalists, academics and historians.

Index on Censorship has been, and will remain, opposed to calls for global delisting of search results, calling the so-called right to be forgotten “a blunt instrument ruling that opens the door for widespread censorship”.

GNI welcomes appeal against the global reach of “the right to be forgotten”

The Global Network Initiative welcomes the announcement that Google is appealing a French data protection authority ruling requiring the global takedown of links to search information banned in France under Europe’s “right to be forgotten”.

We are concerned that the ruling, made by the Commission Nationale de L’Informatique et des Libertes (CNIL) in March, sets a disturbing precedent for the cause of an open and free Internet, and sends the message to other countries that they can force the banning of search results not just inside their own jurisdictions, but also assert that jurisdiction across the globe.

Google began delisting search content in response to the Costeja ruling in July 2014. Search links that are delisted in response to French citizens’ requests are removed from the local French domain (google.fr) as well as from all other European domains. In early 2016 the company announced that it would further restrict access to links delisted in Europe by using geolocation technology to restrict access to the content on any Google Search domain when an individual searches from France. Despite this, the French authorities continue to demand the global removal of these links from all Google search domains – regardless of where in the world they are accessed.

“We are concerned about the impact of the CNIL order, which effectively allows the government of one country to dictate what the rest of the world is allowed to access online,” said GNI Board Chair Mark Stephens, CBE. “Enshrined in international law is the principle that one country cannot infringe upon the rights of citizens of another country,” he said.

Online search engines and intermediaries are vital tools to inform public discourse, hold the powerful to account, and highlight injustice.

“The right of academics, journalists, historians and all citizens to access complete and uncensored information is the bedrock of civic participation and a free society,” said GNI Executive Director Judith Lichtenberg.

“This ruling could set the stage for a global internet where the most censored and repressive societies will effectively dictate the standard for all humanity,” Mr Stephens said.

It is highly problematic that the authorities in one country should be able to force the global removal of search information that, even if deemed inadequate, inaccurate or irrelevant under the criteria of the Costeja ruling, is arguably still lawful and publicly available in other countries. That same information could also be the subject of legal restrictions in other countries, including laws that criminalize the criticism of leaders and governments and laws that ban content pertaining to religious or ethnic minorities, LGBT people, or relating to women’s health.

Previous statements from GNI about the implications of the global enforcement of the “right to be forgotten” can be found on the GNI website.

Are India’s internet laws ready for the digital age?

The Global Network Initiative (GNI) and the Internet and Mobile Association of India (IAMAI) have launched an interactive slide show exploring how India’s internet and technology laws are holding back economic innovation and freedom of expression.

India, which represents the third largest population of internet users in the world, is at a crossroads: while the country protects free speech in its constitution, restrictive laws have undermined India’s record on freedom of expression.

Constraints on digital freedom have caused much controversy and debate in India, and some of the biggest web companies, such as Google, Yahoo and Facebook, have faced court cases and criminal charges for failing to remove what is deemed “objectionable” content. The main threat to free expression online in India stems from specific laws: most notorious among them the 2000 Information Technology Act (IT Act) and the amendments made to it in 2008, after the Mumbai attacks, which introduced new regulations around offence and national security.

In November 2013, Index launched a report exploring the main challenges and threats to online freedom of expression in India, including takedown, filtering and blocking policies, and the criminalisation of online speech.

This article was posted on Aug 1, 2014 at indexoncensorship.org