Online regulation marks global shift towards collective ‘duty of care’

Ruth Green, IBA Multimedia Journalist
Wednesday 16 June 2021

An exponential rise in online abuse and harassment has prompted governments to step up online regulation. However, proposals to introduce sweeping laws that empower regulators to force internet companies to remove content deemed ‘lawful but harmful’ raise serious concerns for privacy, free speech and data security.

On 12 May, the United Kingdom moved a step closer towards creating an online safety bill, which would impose a ‘duty of care’ on digital service providers to moderate user-generated content so that users are not exposed to illegal or harmful material online. The bill has been welcomed by children’s safety campaigners. However, civil liberties groups argue the proposals could lead to subjective censorship that undermines fundamental democratic values.

Self-regulation, which was the mainstay of cable television and print media, is now being brought to regulate online content

Sajai Singh
Partner, J. Sagar Associates

‘It’s probably beneficial to everybody, including tech companies, for there to be some sort of regulatory oversight,’ says Alex Keenlyside, a litigator specialised in contentious media and data protection matters at Bristows in London. However, he says the bill lacks clarity in its current form. ‘Are [companies] being asked to take active steps to monitor content? And if they are, that's fundamentally at odds with the e-Commerce Directive, which imposes a general prohibition on monitoring.’

Australia, Canada and Ireland are among the other jurisdictions where fierce debate continues over the intentions behind, and consequences of, government efforts to moderate online activity. No doubt legislators in these countries and the UK will be watching closely how India has been navigating the challenges of introducing its own online regulatory framework.

The new rules, which fall under the country’s Information Technology Act, were introduced at the end of February. They include the regulation of user-generated content on social media platforms such as Facebook and Twitter, digital news media and over-the-top (OTT) services including Amazon Prime and Netflix. They also contain a new traceability requirement, which obliges end-to-end encrypted messaging platforms like Signal and WhatsApp to hand over private information about their users.

Sajai Singh, a partner at J. Sagar Associates in Bangalore, says the legislation has created significant challenges for technology companies and intermediaries operating in the country. ‘The obvious concern comes from the fact that social media companies, streaming platforms and OTT platforms – which were until now operating freely – are now under some level of direct government supervision,’ says Singh, who is also the Chair of the IBA Technology Committee. ‘Self-regulation, which was the mainstay of cable television and print media, is now being brought to regulate online content.’

The legislation has already escalated tensions in the country’s tech sector. At the end of May, WhatsApp, which is owned by Facebook, filed a lawsuit at the High Court of Delhi. The company alleges that the new traceability requirement threatens to undermine the privacy of its user base in India, which is home to the app’s highest number of monthly active users worldwide.

In the same week, Twitter raised concerns about the safety of its employees following a police raid of its Delhi office after the company labelled a tweet by a government spokesperson as ‘manipulated media’. In early February, the platform suspended ‘more than 500 accounts that were engaging in clear examples of platform manipulation and spam.’ This followed several blocking orders issued by the government calling on Twitter to remove content, including posts that criticised controversial new agricultural laws and the government’s response to the pandemic.

There’s still much uncertainty around compliance with India’s new guidelines, including the requirement for companies to appoint a resident Indian citizen as a chief compliance officer who is responsible for taking down flagged content. Twitter has already requested a three-month extension to allow more time to adjust to the new rules.

In the UK, media regulator Ofcom will be responsible for enforcing the new content policies and is empowered to issue unprecedented fines of up to £18m or as much as ten per cent of a company’s global turnover. These exceed the penalties available under the General Data Protection Regulation (GDPR), which provides for a maximum fine of £17.5m or four per cent of annual global turnover.

Keenlyside fears companies will feel compelled to remove content to avoid facing such hefty fines. ‘These companies place great emphasis on freedom of speech and transparency, but the risk is that they’ll take down more lawful content now because of the financial risk, the potential liability or the reputational headache,’ he says. ‘And they don't want the government to be against them.’

For many smaller tech companies, the regulations will add yet another unwelcome layer of compliance, he says. ‘The risk is to the smaller companies and an even greater risk is to those that don't yet exist, the start-ups who don’t yet have financial stability. I imagine they will now have to invest quite substantial sums in compliance, so that will stifle innovation.’

However, Anurag Bana, a Senior Project Lawyer in the IBA Legal Policy and Research Unit (LPRU), says the legislation could be a wake-up call for the UK tech sector. ‘On one hand, you want to encourage innovation and growth and there’s a lot of investment in innovation hubs,’ he says. ‘At the same time, it doesn't help if those hubs are creating so many new initiatives and not taking any responsibility, because then it will have a ripple effect everywhere.’

Bana says the time is ripe for legislators to work with tech companies and civil society to introduce measures that are proportionate and uphold human rights. ‘It’s a collective responsibility where any kind of content which affects mental health and well-being, particularly of young children and adolescents, is in question,’ he says. ‘Technology companies now have to listen and take ownership in terms of how they’re dealing with data. It is not only the duty of care – it’s the filter of care. I hope the final bill conveys that message in some way because I think that’s key. This is the time for these discussions before it’s too late, not only for the industry, but for society as well.’

For legislation of this magnitude, it’s only right that the bill is subject to months of close parliamentary scrutiny, says Keenlyside: ‘We’re trying to do the impossible here, which is to create a regime in one fell swoop that addresses these terribly complex issues and all the tensions that lie within those issues. The laws around defamation in particular, but also around privacy, have been carefully calibrated and incrementally improved or added to over decades – that’s why they work.’
