Ensuring protection from potentially harmful content online – especially via social media platforms – remains a priority for governments and data regulators. Some countries and regions have been more proactive than others in this respect. But commentators suggest that even where legislation is ahead of the rest, it’s still patchy. Meanwhile, legislative attempts to correct this may not be flexible or robust enough to apply to how technology will operate and content will be delivered, moderated and removed in the future. Enforcing such legislation is also problematic.
However, children and other vulnerable people remain highly susceptible to some of the material that continues to circulate, sometimes precisely because it is so popular and, in certain circumstances, relatable.
A paper published last year by the UK medical journal Comprehensive Psychiatry highlighted that children and teenagers watching TikTok videos, for example, are self-diagnosing conditions such as autism, Tourette’s syndrome and attention deficit hyperactivity disorder (ADHD), even though distinguishing between mental health conditions takes years of professional experience. Researchers believe that part of the reason is genuine frustration with waiting times for a professional diagnosis. But another phenomenon is that children want to emulate the physical ‘tics’ they see, viewing these conditions as ‘consumer identities or character traits that make individuals sharper and more interesting than others around them’ – a result of the number of views, likes, followers and hashtags such videos generate. Videos on social media with hashtags such as #DID [dissociative identity disorder], #borderlinepersonalitydisorder and #bipolardisorder have received millions of views, the paper says.
TikTok says it takes seriously its responsibility to keep the platform a safe space and that it takes action against medical misinformation, in line with its own community guidelines. It’s also taking steps to enhance the digital literacy education of those who engage with its platform, especially young people, for instance through its #FactCheckYourFeed campaign.
The EU aims to tackle online harms through its Digital Services Act (DSA), which came into force in November 2022 and took effect in February. To some, this legislation marks the biggest shake-up of the rules governing online intermediary liability in 20 years. Online platforms accessible to minors must now set out details regarding content moderation measures and algorithmic decision-making, and implement appropriate and proportionate measures to ensure a high level of privacy, safety and security.
Breaches of the DSA lead to turnover-based fines. Companies – most notably social media platforms – can be punished with a fine of up to six per cent of annual worldwide turnover, and users have a right to compensation for any damage or loss suffered as a result of a breach. The legislation also allows for enforcement both by national regulators and by the European Commission.
However, the DSA mainly takes a non-interventionist stance. For example, although social media companies will need to be more transparent and are subject to greater obligations and monitoring, there’s very little to force them to police content. Instead, the legislation tries to improve transparency around content hosting and moderation by giving users more rights to complain about content and a greater capability to appeal if complaints are not dealt with satisfactorily.
Other jurisdictions have either planned or already put in place laws regarding how online content should be monitored and moderated: India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, Singapore’s Protection from Online Falsehoods and Manipulation Act and the UK’s Online Safety Bill are all key examples.
Content regulation laws have also been passed in the US – though with a focus on prohibiting what lawmakers have termed censorship. Such legislation includes Florida’s Stop Social Media Censorship Act, which allows users to sue tech platforms for alleged censorship. Other US states are considering similar legislation to prevent tech companies from moderating or deleting content where, lawmakers allege, the platforms are doing so in line with their own political viewpoint.
Legal challenges have been raised against the Florida legislation and elsewhere on the grounds of preserving social media companies’ rights to freedom of speech as guaranteed under the First Amendment to the US Constitution. Tech groups have sought to preserve their right to regulate content where they believe it may lead to violence.
Dana Green, Chair of the IBA Media Law Committee and Senior Counsel at the New York Times, says ‘in the US, we do not generally distinguish between online harms and offline harms. Speech either is or is not lawful, regardless of the medium, and it’s no secret that it’s a high bar in the US before speech can be punished’. For example, the First Amendment protects hate speech, which is often criminalised in Europe and elsewhere, ‘because we distrust giving authorities the right to make these subjective decisions’, says Green.
Similarly, falsehoods are also constitutionally protected in the US, outside of narrow categories, such as defamation, certain commercial speech and criminal fraud. A large part of the wariness of criminalising free speech, says Green, developed through cases in the middle of the twentieth century, at a time when fascism and communist totalitarianism offered vivid examples of the dangers of state-regulated speech. ‘People may be socially ostracised or kicked off a particular social media platform for expressing prejudiced views’, says Green, ‘but putting them in jail purely for speech is really anathema to American law’.
Adriana de Buerba, Co-Chair of the IBA Criminal Law Committee and a partner at Spanish law firm Pérez-Llorca, says that in Spain, protection from online harms is regulated in the Spanish Criminal Code (SCC). Although there are no specific sections of the SCC punishing this type of conduct, nor a specific offence of ‘cybercrime’, the SCC protects against harms through a wide range of offences that can be committed not only online, but also through other channels. These include cases of online fraud; the distribution of illicit content through social media aimed at promoting suicide or self-harm among minors; and the online dissemination of content aimed at facilitating the consumption of products that pose a health risk to minors.
However, while Spain may have criminal legislation in place that’s aligned with EU standards, prosecution records are low due to a lack of police resources, the complexity of prosecuting complaints successfully and the difficulty of preventing harms in the first place – factors that have contributed to fewer people making formal complaints, says De Buerba. Beyond social media platforms, other companies also need to be aware that they can be prosecuted under the legislation ‘as this type of behaviour can potentially occur in any business environment’, De Buerba adds.