The UK government’s online harms white paper: implications for freedom of expression

[vc_row][vc_column][vc_column_text]

Recommendations

  • Parliament must be fully involved in shaping the government’s proposals for online regulation as the proposals have the potential to cause large-scale impacts on freedom of expression and other rights.
  • The proposed duty of care needs to be limited and defined in a way that addresses the risk that it will create a strong incentive for companies and others to censor legal content, especially if combined with fines and personal liability for senior managers.
  • It is important to widen the focus from harms and what individual users do online to the structural and systemic issues in the architecture of the online world. For example, much greater transparency is needed about how algorithms influence what a user sees.
  • The government is aiming to work with other countries to build international consensus behind the proposals in the white paper. This makes it particularly important that the UK’s plans for online regulation meet international human rights standards. Parliament should ensure that the proposals are scrutinised for compatibility with the UK’s international obligations.
  • More scrutiny is needed regarding the implications of the proposals for media freedom, as “harmful” news stories risk being caught.

 

Introduction

The proposals in the government’s online harms white paper risk damaging freedom of expression in the UK, and abroad if other countries follow the UK’s example.

  • A proposed new statutory duty of care to tackle online “harms” combined with substantial fines and possibly even personal criminal liability for senior managers would create a strong incentive for companies to remove content.
  • The “harms” are not clearly defined but include activities and materials that are legal.
  • Even the smallest companies and non-profit organisations are covered, as are public discussion forums and file sharing sites.

The proposals come less than two months after the widely criticised Counter-Terrorism and Border Security Act 2019. The act contains severe limitations on freedom of expression and access to information online (see Index report for more information).

 

The duty of care: a strong incentive to censor online content

The proposed new statutory duty of care to tackle online harms, combined with substantial fines and possibly even personal criminal liability for senior managers, risks creating a strong incentive to restrict and remove online content.

Will Perrin and Lorna Woods, who have developed the online duty of care concept, envisage that the duty will be implemented by applying the “precautionary principle”, which would allow a future regulator to “act on emerging evidence”.

Guidance by the UK Interdepartmental Liaison Group on Risk Assessment (UK-ILGRA) states:

“The purpose of the Precautionary Principle is to create an impetus to take a decision notwithstanding scientific uncertainty about the nature and extent of the risk, i.e. to avoid ‘paralysis by analysis’ by removing excuses for inaction on the grounds of scientific uncertainty.”

The guidance makes sense when addressing issues such as environmental pollution, but applying it in a context where freedom of expression is at stake risks legitimising censorship – a very dangerous step to take.

 

Not just large companies

The duty of care would cover organisations of all sizes: social media companies, public discussion forums, retailers that allow users to review products online, non-profit organisations (for example, Index on Censorship), file sharing sites and cloud hosting providers. A blog with comments would be included, as would shared Google documents.

The proposed new regulator is supposed to take a “proportionate” approach, which would take into account companies’ size and capacity, but it is unclear what this would mean in practice.

 

Censoring legal “harms”

The white paper lists a wide range of harms: for example, terrorist content, extremist content, child sexual exploitation, organised immigration crime, modern slavery, content illegally uploaded from prisons, cyberbullying, disinformation, coercive behaviour, intimidation, under-18s using dating apps and excessive screen time.

The harms are divided into three groups: harms with a clear definition; harms with a less clear definition; and underage exposure to legal content. Activities and materials that are not illegal are explicitly included. This would create a double standard, where activities and materials that are legal offline would effectively become illegal online.

The focus on the catch-all term “harms” tends to oversimplify the issues. For example, Online Nation, a recent study by Ofcom and the Information Commissioner’s Office, found that 61% of adults had a potentially harmful experience online in the last 12 months. However, this included “mildly annoying” experiences. Not all harms need a legislative response.

 

A new regulator

The white paper proposes the establishment of an independent regulator for online safety, which could be a new or existing body. It mentions the possibility of an existing regulator, possibly Ofcom, taking on the role for an interim period to allow time to establish a new regulatory body.

The future regulator would have a daunting task: defining what companies (and presumably also others covered by the proposed duty of care) would need to do to fulfil the duty of care, establishing a “transparency, trust and accountability framework” to assess compliance, and taking enforcement action as needed.

The regulator would be expected to develop codes of practice setting out in detail what companies need to do to fulfil the duty of care. If a company chose not to follow a particular code it would need to justify how its own approach meets the same standard as the code. The government would have the power to direct the regulator in relation to codes of practice on terrorist content and child sexual exploitation and abuse.

 

Enforcement

The new enforcement powers outlined in the white paper will include substantial fines. The government is inviting consultation responses on a list of possible further enforcement measures. These include disruption of business activities (for example, forcing third-party companies to withdraw services), ISP blocking (making a platform inaccessible from the UK) and creating a new liability for individual senior managers, which could involve personal liability for civil fines or could even extend to criminal liability.

 

Undermining media freedom

The proposals in the white paper pose a serious risk to media freedom. Culture Secretary Jeremy Wright has written to the Society of Editors in response to concerns, but many remain unconvinced.

As noted, the proposed duty of care would cover a very broad range of “harms”, including disinformation and violent content. In combination with fines and potentially even personal criminal liability, this would create a strong incentive for platforms to remove content proactively, including news that might be considered “harmful”.

Index has filed an official alert about the threat to media freedom with the Council of Europe’s Platform to promote the protection of journalism and safety of journalists. Index and the Association of European Journalists (AEJ) have made a statement about the lack of detail in the UK’s reply to the alert. At the time of writing the UK has not provided a more detailed reply.

 

Censorship and monitoring

The European Union’s e-commerce directive is the basis for the current liability rules related to online content. The directive shields online platforms from liability for illegal content that users upload unless the platform is aware of the content. The directive also prohibits general monitoring of what people upload or transmit.

The white paper states that the government’s aim is to increase this responsibility and that it will introduce specific monitoring requirements for some categories of illegal content. This comes close to dangerous censorship territory, and it is doubtful whether it would be compatible with the e-commerce directive.

Restrictions on freedom of expression and access to information are extremely serious measures and should be backed by strong evidence that they are necessary and will serve an important purpose. Under international law freedom of expression can only be restricted in certain limited circumstances for specific reasons. It is far from clear that the proposals set out in the white paper would meet international standards.

 

Freedom of expression – not a high priority

The white paper gives far too little attention to freedom of expression. The proposed regulator would have a specific legal obligation to pay due regard to innovation, but when it comes to freedom of expression the paper refers only to an obligation to protect users’ rights “particularly rights to privacy and freedom of expression”.

It is surprising and disappointing that the white paper, which sets out measures with far-reaching potential to interfere with freedom of expression, does not contain a strong and unambiguous commitment to safeguarding this right.

 

Contact: Joy Hyvarinen, Head of Advocacy, [email protected][/vc_column_text][/vc_column][/vc_row]

Stifling free speech online in the war on fake news

[vc_row][vc_column][vc_column_text]“Fake news”. The phrase emerged only a few years ago, yet it has become familiar to everybody, and the moral panic around it has grown rapidly. In its short life it has been named Collins Dictionary’s word of 2017, and the Bulletin of the Atomic Scientists say it was one of the driving factors behind their decision to set the symbolic Doomsday Clock to two minutes to midnight in 2019. It is a talking point on the lips of academics, media pundits and politicians.

Many fear that “fake news” could lead to the end of democratic society by clouding our ability to think critically about important issues. Yet the febrile atmosphere surrounding it has led to legislation around the world that could potentially harm free expression far more than the conspiracy theories being peddled.

In Russia and Singapore politicians have taken steps to legislate against the risk of “fake news” online. A white paper published in April 2019 by the Department for Digital, Culture, Media and Sport could lead to stronger restrictions on free expression on the internet in the UK.

The Online Harms White Paper proposes ways in which the government can combat what are deemed to be harmful online activities. However, while some of the harmful activities specified — such as terrorism and child abuse — clearly fall within the government’s remit, the paper also places various unclearly defined practices, such as “disinformation”, under scrutiny.

Internet regulation would be enforced by a new independent regulatory body, similar to Ofcom, which currently regulates broadcasts on UK television and radio. Websites would be expected to conform to the regulations set by the body.

According to Jeremy Wright, the UK’s Secretary of State for Digital, Culture, Media and Sport, the intention is that this body will have “sufficient teeth to hold companies to account when they are judged to have breached their statutory duty of care”.

“This will include the power to issue remedial notices and substantial fines,” he says, “and we will consult on even more stringent sanctions, including senior management liability and the blocking of websites.”

According to Sharon White, the chief executive of the UK’s media regulatory body Ofcom, the term “fake news” is problematic because it “is bandied around with no clear idea of what it means, or agreed definition. The term has taken on a variety of meanings, including a description of any statement that is not liked or agreed with by the reader.” The UK government prefers to use the term “disinformation”, which it defines as “information which is created or disseminated with the deliberate intent to mislead; this could be to cause harm, or for personal, political or financial gain”.

However, the difficulty of proving that false information was published with an intention to cause harm could potentially affect websites which publish honestly held opinions or satirical content.

As a concept, “fake news” is frequently prone to bleeding beyond the boundaries of any attempt to define it. Indeed, for many politicians, that is not only the nature of the phrase but the entire point of it.

“Fake news” has become a tool for politicians to discredit voices which oppose them. Although the phrase may have been popularised by US President Donald Trump to attack his critics, the idea of “fake news” has since been adopted by authoritarian regimes worldwide as a justification for deliberately silencing opposition.

As late US Senator John McCain wrote in a piece for The Washington Post: “the phrase ‘fake news’ — granted legitimacy by an American president — is being used by autocrats to silence reporters, undermine political opponents, stave off media scrutiny and mislead citizens.

“This assault on journalism and free speech proceeds apace in places such as Russia, Turkey, China, Egypt, Venezuela and many others. Yet even more troubling is the growing number of attacks on press freedom in traditionally free and open societies, where censorship in the name of national security is becoming more common.”

In Singapore — a country ranked 151st out of 180 for press freedom by Reporters Without Borders in 2019 — a bill was introduced to parliament ostensibly intended to combat fake news.

Singapore’s Protection from Online Falsehoods and Manipulation Bill would permit government ministers to order the correction or removal of online content which is deemed to be false. It is justified under very broad, tautological definitions which state amongst other things that “a falsehood is a statement of fact that is false or misleading”. On this basis, members of the Singaporean government could easily use this law to censor any articles, memes, videos, photographs or advertising that offends them personally, or is seen to impair the government’s authority.

In addition to more conventional definitions of public interest, the term is defined in the bill as including anything which “could be prejudicial to the friendly relations of Singapore with other countries.” The end result is that Singaporeans could potentially be charged not only for criticising their own government, but Singapore’s allies as well.

Marte Hellema, communications and media programme manager for the human rights organisation FORUM-ASIA, explains: “We are seriously concerned that the bill is primarily intended to repress freedom of expression and silence dissent in Singapore.”

Hellema pointed out that the law would be in clear violation of international human rights standards and criticised its use of vague terms and lack of definitions.

“Combined with intrusive measures such as the power to impose heavy penalties for violations and order internet services to disable content, authorities will have the ability to curtail the human rights and fundamental freedoms of anyone who criticises the government, particularly human rights defenders and media,” Hellema says.

In Russia, some of the most repressive legislation to come out of the wave of talk about “fake news” was signed into law earlier this year.

In March 2019, the Russian parliament passed two amendments to existing data legislation to combat fake news on the internet.

The laws censor online content which is deemed to be “fake news” according to the government, or which “exhibits blatant disrespect for the society, government, official government symbols, constitution or governmental bodies of the Russian Federation”.

Online news outlets and users that repeatedly run afoul of the laws will face fines of up to 1.5 million roubles (£17,803) for being seen to have published “unreliable” information.

Additionally, individuals who have been accused of specifically criticising the state, the law or the symbols which represent them risk further fines of 300,000 roubles (£3,560) or even prison sentences.

The move has been criticised by public figures and activists, who see the new laws as an attempt to stifle public criticism of the government and increase control over the internet. The policy is regarded as a continuation of previous legislation in Russia designed to suppress online anonymity and blacklist undesirable websites.[/vc_column_text][/vc_column][/vc_row]

No sex please, we’re British: Why is no one talking about the UK’s impending “porn ban”?

[vc_row][vc_column][vc_column_text]

There is a tired stereotype that the British don’t “do” sex: it’s too embarrassing, too shocking to be talked about in public spaces. The cliché has long been considered outdated in the wake of the obscenity trials of the 1960s, when works such as Lady Chatterley’s Lover were vindicated and, so the story goes, the way was paved to a more permissive society.

Yet with the passage of the UK’s Digital Economy Act 2017, that could all be about to change. The act, drawn up by the Department for Digital, Culture, Media and Sport, mandates — among other things — that all pornographic websites carry out strong age verification checks. Ostensibly created to safeguard children from pornography, these checks would require users either to purchase a special “porn pass” in a physical shop or to submit official forms of documentation to private companies for verification.

This age verification scheme has been repeatedly delayed, most recently on 1 April 2019. However, it has not been taken off the table, and the government is still working to roll out the scheme in the near future. This is despite the policy having been met with criticism by many anti-censorship and privacy campaigners, who see it as a threat to online anonymity, net neutrality and sexual freedom.

The law has the potential to pose a severe threat to the anonymity of people in the UK.

Jodie Ginsberg, CEO of Index on Censorship, explains: “This plan is riddled with problems. As David Kaye, the UN’s Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression has said, identity disclosure requirements in law allow authorities to identify people more easily, which threatens anonymous expression”.

David Kaye has long been a vocal critic of violations of net neutrality such as the UK’s “porn ban”, which can have a severe chilling effect on the public’s ability to express themselves freely.

He states that: “The internet has profound value for freedom of opinion and expression, as it magnifies the voice and multiplies the information within reach of everyone who has access to it. Within a brief period, it has become the central global public forum. As such, an open and secure internet should be counted among the leading prerequisites for the enjoyment of freedom of expression today.”

According to Kaye, any requirement to disclose identity or obtain officially recognised documentation in order to access pornography would allow authorities to identify people more easily, and threatens that kind of anonymous expression. Similarly, creating legally sanctioned barriers that severely restrict access to online content such as pornography would harm individuals’ right to expression.

The law will be overseen by the British Board of Film Classification, the 107-year-old organisation responsible for age ratings on films, which among other things rated Watership Down — a 1978 British animated adventure-drama replete with violent scenes of rabbits ripping each other’s throats out — “U” for universal, or suitable for all. However, the actual enforcement will be left to private “age verification” companies.

The systems currently being prepared by companies such as AgeID and AgeChecked typically require the user to provide formal, high-risk personal identification such as a passport, driver’s licence or credit card through a third party. Once sufficient identification is provided, users will be granted access to all websites that use the same verification system. Other information such as names, addresses and bank details may also be required, and no matter how well encrypted, the storing of this data comes with a risk.

Yet with AgeID — the company expected to dominate the UK age verification market — owned by pornography giant MindGeek, this risk may be heightened. MindGeek is also the parent company of PornHub, YouPorn and RedTube, which represents a potential conflict of interest.

As Jim Killock, executive director of the Open Rights Group explains: “The porn company MindGeek will become the Facebook of age verification, dominating the UK market. They would then decide what privacy risks or profiling take place for the vast majority of UK citizens.”

“The government has repeatedly refused to ensure that there is a legal duty for age verification providers to protect the privacy of web users,” Killock adds. “Age verification could lead to porn companies building databases of the UK’s porn habits, which could be vulnerable to Ashley Madison style hacks.”

The risk of these databases being hacked is real. In the 2015 Ashley Madison data breach, hackers exposed the names, email addresses, phone numbers and credit card details of users of the dating site, whose slogan was “Life is short. Have an affair”.

Subsequent to the breach, exposed users faced ruined careers, broken marriages, extortion and even suicide, as in the case of John Gibson, a pastor at the New Orleans Baptist Theological Seminary whose suicide note mentioned the data breach.

For one vocal critic, Clarissa Smith, professor of sexual cultures at the University of Sunderland, the law seems an ineffective attempt by the government to appear to be addressing a largely manufactured issue. She argues that the resulting law fails to tackle the underlying problems it claims to address.

“As is often the case with this kind of legislation, it was rolled into a portmanteau bill, and the age verification provisions were barely discussed because they’re just one aspect of the bill,” Smith said. “The legislation then passed, and most people are unaware that the provisions were even there. Those that do take note are often dismissed as the cranks or perverts ‘who don’t care about children’.”

“The result is that broader problems about data security, rights to privacy and sexual freedoms are all swept away in the name of ‘doing something for the kids’,” she added. “And people ignore this kind of legislation because they presume it won’t apply to their own sex lives, or that if they don’t watch much porn they won’t be affected.”

The issue, Smith argues, is not that pornography is accessible, but that it is not talked about with young people, except to discourage them from seeking it out.

“I think there are lots of more productive ways of dealing with adult concerns about what young people are viewing online. Too often they turn straight to protection, prevention and prohibition when what could be done would be to offer education and support so that kids make the right choices for themselves.”

According to Jerry Barnett, campaigner and author of Porn Panic!: Sex and Censorship in the UK, the regulation of the porn industry is a sign of greater underlying threats to net neutrality. For Barnett, the problems with the age verification system, though still significant, pale in comparison to the government’s power to block porn websites outright if they don’t comply with the law. He says that allowing the government to block non-compliant porn sites could set a dangerous precedent for other forms of online censorship in the future.

“While the discussion has centred on the ‘age verification’ requirement, the real problem comes with the blocking system that will be used to prevent UK citizens accessing non-compliant sites,” Barnett says. “That should be the focus of civil liberties campaigners, as it can be used to block anything.”

“The burning issue is that a ‘Great Firewall of Britain’ has been quietly built using the age verification requirements as an excuse,” he adds. “The blocking system, rather than the age verification system, will fundamentally change the nature of the internet.”

The Digital Economy Act permits the BBFC to block porn websites that do not comply with its regulations, or, as an alternative, to impose fines of up to £250,000.

Barnett argues that had the law not been specifically targeted at pornography, legislation such as the Digital Economy Act would have been the subject of strong public debate, like the recent discussion surrounding Article 13 of the EU’s new copyright directive. Like Smith, he suggests that such debate struggled in the face of the strong cultural stigma surrounding sex and pornography.

“The issue requires someone who will ‘defend the indefensible’ in the media, and there aren’t many of us around,” he says. “In order to oppose censorship, you find yourself defending things — such as the right of teenagers to access sexual content — that few people are prepared to defend.”

Despite the many issues that plague the law, the vast majority of the UK population is apparently unaware of it. According to a YouGov poll published on 18 March, less than a month before the new regulations were due to be rolled out, only 24% of Britons were aware of the new law or what it entailed.

The same study found that 67% of those surveyed supported the changes after being informed of the law — including over a quarter of those who viewed pornography most frequently.

Seemingly, even many people who in private would be hit hardest by the new measures support them due to the social stigma of being opposed to them.

Of course, one of the largest problems the government will face if the legislation is ever officially enforced — and a likely reason why the law has been repeatedly delayed — is virtual private networks such as NordVPN, HideMyAss! and Cyberghost, which allow users to browse the internet anonymously, and even to appear to be doing so from a country completely different to their actual location.

With the UK being the only democratic country to so far attempt to block pornography, the “Great Firewall” is likely to be one with many holes in it.

This article was updated on 2 May 2019 to clarify language around age verification systems.[/vc_column_text][/vc_column][/vc_row]

Online harms proposals pose serious risks to freedom of expression

[vc_row][vc_column][vc_column_text]Index on Censorship has raised strong concerns about the government’s focus on tackling unlawful and harmful online content, particularly since the publication of the Internet Safety Strategy Green Paper in 2017. In October 2018, Index published a joint statement with Global Partners Digital and Open Rights Group noting that any proposals that regulate content are likely to have a significant impact on the enjoyment and exercise of human rights online, particularly freedom of expression.

We have also met with officials from the Department for Digital, Culture, Media and Sport, as well as from the Home Office, to raise our thoughts and concerns.

With the publication of the Online Harms White Paper, we would like to reiterate our earlier points.

While we recognise the government’s desire to tackle unlawful content online, the proposals mooted in the white paper – including a new duty of care on social media platforms, a regulatory body, and even the fining and banning of social media platforms as a sanction – pose serious risks to freedom of expression online.

These risks could put the United Kingdom in breach of its obligations to respect and promote the right to freedom of expression and information as set out in Article 19 of the International Covenant on Civil and Political Rights and Article 10 of the European Convention on Human Rights, amongst other international treaties.

Social media platforms are a key means for tens of millions of individuals in the United Kingdom to search for, receive, share and impart information, ideas and opinions. The scope of the right to freedom of expression includes speech which may be offensive, shocking or disturbing. The proposed responses for tackling online safety may lead to disproportionate amounts of legal speech being curtailed, undermining the right to freedom of expression.

In particular, we raise the following concerns related to the white paper:

  1. Lack of evidence base

The wide range of different harms which the government is seeking to tackle in this policy process requires different, tailored responses. Measures proposed must be underpinned by strong evidence, both of the likely scale of the harm and of the measures’ likely effectiveness. The evidence that formed the basis of the Internet Safety Strategy Green Paper was highly variable in its quality. Any legislative or regulatory measures should be supported by clear and unambiguous evidence of their need and effectiveness.

  2. Duty of care concerns and problems with the definition of ‘harm’

Index is concerned at the use of a duty of care regulatory approach. Although social media has often been compared to the public square, the duty of care model is not an exact fit because it would introduce regulation – and restriction – of speech between individuals based on criteria far broader than current law. A failure to accurately define “harmful” content risks incorporating legal speech, including political expression, expressions of religious views, expressions of sexuality and gender, and expression advocating on behalf of minority groups.

  3. Risks in linking liability/sanctions to platforms over third party content

While well-meaning, proposals such as these contain serious risks, such as requiring or incentivising wide-sweeping removal of lawful and innocuous content. The imposition of time limits for removal, heavy sanctions for non-compliance or incentives to use automated content moderation processes only heighten this risk, as has been evidenced by the approach taken in Germany via its Network Enforcement Act (or NetzDG), where there is evidence of the over-removal of lawful content.

  4. Lack of sufficient protections for freedom of expression

The obligation to protect users’ rights online that is included in the white paper gives insufficient weight to freedom of expression. A much clearer obligation to protect freedom of expression should guide development of future regulation.

In recognition of the UK’s commitment to the multistakeholder model of internet governance, we hope all relevant stakeholders, including civil society experts on digital rights and freedom of expression, will be fully engaged throughout the development of the Online Harms bill.

[/vc_column_text][/vc_column][/vc_row]
