The fight against harmful content

Arthur Piper, IBA Technology Correspondent

Wednesday 13 July 2022

Attempts to protect against the obvious dangers of some online content are long overdue – but far from straightforward.

The UK’s plans to significantly toughen its stance on illegal and harmful online content have been gradually moving through the legislative process. The government’s Online Safety Bill, introduced in March and now set for delay and further consideration later in the year due to the change in leadership of the UK government, is the first effort to codify specific laws that would govern what content social media platforms and search engines can serve up to their users.

Currently in the UK and elsewhere, such social technology companies mainly have reactive duties: they must remove content that has been posted when they become aware it’s illegal. Under the UK’s Electronic Commerce (EC Directive) Regulations 2002, they don’t have to monitor anything shared or uploaded on their platforms – although in practice many have chosen to do so.

Duty of care – or not?

Most significantly, the government says the Online Safety Bill introduces for the first time a statutory duty of care for platforms that curate user-generated content. This represents a major shift in the legal framework for social technology companies operating in the UK – and is reminiscent in its extraterritorial scope of the EU General Data Protection Regulation (GDPR) 2018. Although the US Congress is currently considering a bill that could become the Kids Online Safety Act, the UK laws, if passed, will hit US platforms operating in the UK just as the GDPR affected data collected in Europe but processed by US enterprises.

While this suggests the introduction of a duty of care – a concept borrowed from the tort of negligence, such as harm caused by careless driving – experts say the UK Bill does no such thing. Instead, the Bill sets out online safety objectives in its Schedule 4. David Barker, a partner at Pinsent Masons, says those objectives could be turned into a concise set of rules that organisations must follow. That would effectively mirror the approach taken by the GDPR, but Barker is one of many critics who say that the precise nature of those duties is too vaguely drawn.


Nevertheless, if the Bill is passed into law, all relevant businesses operating in the UK will have to address illegal content, with a particular focus on material that could harm adults and children or that relates to terrorism, drugs, weapons and sexual misconduct. In the first instance, organisations must prevent users from seeing or interacting with such material and be willing and able to take it down quickly. They’ll need to adjust their technology processes to identify, manage and mitigate the risk of harm in these areas, and set out clearly in their terms of service how users will be protected.

This shift to getting systems right is another major change, which US academics such as Mark MacCarthy, a non-resident senior fellow in Governance Studies at the Center for Technology Innovation in Washington, DC, have praised. He points out that it helps sidestep the long-disputed idea that online platforms are essentially publishers, despite the fact they have little say over user-generated content. Having systems in place that effectively police illegal content based on published and regulated terms and services creates a unique communications category for social technology platforms.

As Global Insight reported late last year (see ‘Facebook, Meta and the power of tech’), regulators and companies such as Facebook had been looking to amend Section 230 of the US Communications Decency Act 1996. That legislation gives safe harbour to social media platforms against legal liability for content users post, but does not mandate access for regulators to see how the platforms’ content algorithms work. The UK’s approach would open up those systems but, surprisingly, not to academics and independent public interest bodies – a potential missed opportunity for greater transparency and scrutiny.

The UK government has appointed Ofcom to act as regulator for the regime, and the Bill gives it powers to enforce compliance with potentially eye-watering fines of up to ten per cent of a business’s global annual turnover. Specifically, Ofcom is empowered to gather the information it needs from businesses to ensure compliance – and will be able to force them to install ‘proactive technologies to identify illegal content and ensure children aren’t encountering harmful material’, according to the draft rules. That sweeps away today’s reactive approach in favour of a more onerous, proactive one.

But the UK Bill is large, complex and unwieldy. Even if it were passed in full, it would need secondary legislation to make it work in crucial areas. For example, the Bill uses categories 1, 2A and 2B to distinguish between different types of provision (generally based on how many users the business has and what services it offers), but the thresholds that would sort organisations into these groups for Ofcom’s scrutiny aren’t set out in the Bill itself. That leaves many companies in the dark and unable to prepare for the legislation. techUK, a technology trade association, has said it’s ‘disappointed’ not to have further clarity in this area.

In addition, the Bill gives the regulator the ability to police ‘legal but harmful’ content. As it stands, the Secretary of State, working with Ofcom, will be left to define specifically what ‘harm’ is, since the Bill merely says it includes content ‘of a kind which presents a material risk of significant harm to an appreciable number of adults in the United Kingdom’.

A similar provision exists for content aimed at children. But adding such a subjective measure in a bill aimed at regulating through clearly defined rules and systems threatens to pitch the sector into a miasma of dispute.

Free speech

Civil rights groups have complained that such vague wording represents the potential for a serious curtailment of free speech. ‘It is near impossible for users to understand whether and how their speech could cause harm, in particular if they do not know their audience’, said the civil liberties group Article 19. ‘Coupled with the heavy sanctions regime the Bill is seeking to introduce, it is expected that companies will opt for a zero-risk strategy and generously remove vast amounts of legal content if there is even the slightest chance that it could be considered harmful to what is an undefined number of adults with subjective sensitivities.’

But perhaps the mood music on the censorship of specifically illegal content on both sides of the Atlantic is beginning to change. In mid-July, the US entrepreneur Elon Musk announced he was pulling out of his planned takeover of Twitter. But events prior to this decision suggest a shift in Musk’s attitude to content moderation. The self-declared ‘free speech absolutist’ drew criticism from the EU’s Commissioner for the Internal Market, Thierry Breton, after Musk promised to scrap Twitter’s current moderation rules in favour of greater free expression.

America’s proponents of a libertarian internet may have hoped that the UK’s Bill and similar legislation tabled in the EU (the Digital Services Act) would meet their match in Musk as a figure of resistance. But in a meeting between the two men in May this year, Breton apparently persuaded Musk that more transparent algorithms and clear content rules were the way to go. ‘I agree with everything you said, really’, Musk said in a video post after the two met in Texas. ‘I think we’re very much on the same line.’

This seems to represent a decisive shift that recognises both the uniqueness of internet content and the need to moderate its worst excesses to protect users. Meanwhile, the Cyberspace Administration of China announced in June that it plans to tighten up the way internet platforms, apps and websites moderate content. There are already strictures in place defining the content that companies can publish.

The proposed new rules target the comments sections on websites and social apps, which have until now been something of a backwater for censorship attention. Unlike the UK proposals, the Chinese regulatory system currently depends on thousands of workers at companies such as ByteDance to actively review and potentially delete posts before the censors see them. The colossal effort this entails has given China a thriving ‘censorship-for-hire’ industry. That level of scrutiny could now be extended to the millions of comments that appear every day on sites, in chat rooms and on comment boards. Some commentators worry that such a ‘review first, publish later’ approach could effectively end freedom of speech in one of the few safe havens of open Chinese political discussion.

Arthur Piper is a freelance journalist. He can be contacted at arthurpiper@mac.com