Dangerous new developments drive government action to counter fake news

The continued proliferation of disinformation online – often referred to as ‘fake news’ – has this summer led to increased scrutiny and action from governments.

In India, the dangerous consequences of disinformation were made clear by violent incidents connected to malicious rumours spread on social media. In one incident in the state of Maharashtra, allegations appearing via WhatsApp that child kidnappers were operating in the area were linked to the lynching of five men by a mob. Similar crimes were reported in Sri Lanka and Myanmar.

India’s government applied pressure on WhatsApp following the events, and in July WhatsApp introduced limits on the forwarding of messages on its service.

An Oxford University study, published on 6 September, made clear that disinformation online is not going away, especially around elections. The study claims that one in three news articles published online in the run-up to the Swedish parliamentary election on 9 September was ‘junk news,’ a term the researchers defined as ‘misleading, deceptive or incorrect information purporting to be real news.’

The UK Parliament’s Digital, Culture, Media and Sport Committee, which through its ‘Disinformation and “fake news”’ inquiry has looked at subjects including election advertising and foreign influence in electoral campaigns, released its interim report on 29 July. The report’s recommendations reflect the increasing pressure governments are placing on tech firms to do more; the Committee specifically called for ‘clear legal liability’ for tech companies to deal with misleading and harmful material on their websites.

“The proliferation of sources without the proliferation of traditional editorial processes is part of the [disinformation] problem”

Robert D. Balin
Chair of the IBA Media Law Committee

The report also suggested placing such firms in a new category of liability, as neither ‘publisher’ nor ‘platform’. ‘Social media companies have tended to adhere to the view that they are platforms or conduits rather than publishers. Arguably, if social media companies were to be firmly identified as publishers then existing laws would suffice,’ comments Julian Hamblin, Vice Chair of the IBA’s Internet Business Subcommittee.

Some countries have already turned to legislation. In France – which itself saw the spread of false information during the run-up to its presidential election in spring 2017 – President Emmanuel Macron’s government put forward two draft laws in March 2018, designed to counter misinformation in the three-month period leading up to elections.

The draft laws would force social media platforms of a certain size to adopt transparent measures to target false information. These include platforms having to notify users about who is paying for promotions on the platform. If an individual or organisation believes false information is being disseminated about them deliberately and on a large scale, they could seek a court order for its removal.

The French Audiovisual Council would also gain new powers to suspend the activity of a radio or TV service that’s owned or influenced by a foreign state, if it believes the service is spreading false information.

‘In the proposals, misinformation can be understood as false information - excluding, of course, parody or satirical content - likely to alter the sincerity of the forthcoming poll, which has been disseminated online, both massively and artificially i.e., in particular, through sponsored or promoted content using “bots”’, explains Claire Bouchenard, partner at Osborne Clarke. Critics say the draft legislation risks suppressing legitimate information.

While the French National Assembly passed the draft laws, they were rejected by the Senate on 26 July. A joint committee of the two chambers will now try to agree a compromise text.

In Germany, the country’s controversial ‘NetzDG’ law – designed to counter misinformation and hate speech online – has been in force since 1 January 2018. The NetzDG law makes platforms liable for – and forces them to remove – content found to be ‘manifestly unlawful’ under the hate speech and defamation provisions of the German Criminal Code. Platforms must also publish regular transparency reports on how they handle complaints under the law. The law has drawn substantial criticism, with opponents concerned that it impinges on freedom of speech.

‘Facebook, Twitter and YouTube published their transparency reports a few weeks ago. Results mainly show that the platforms apply different deletion schemes and that there is a tendency to block more than is necessary,’ explains Dr. Martin Schirmbacher, Co-Chair of the IBA Technology Committee and partner at Härting Rechtsanwälte. The German Chancellor Angela Merkel said on her podcast on 2 February that her government would ‘evaluate’ the law and its consequences.

One way to counter concerns about the inappropriate deletion of content might be to provide a right for posts deleted without legal basis to later be reinstated, believes Schirmbacher. Another option might be to ‘make the platforms’ reasons for deleting content more transparent and enforce their cooperation with a mechanism of self-control, so that independent institutions would decide [whether to delete content],’ adds Schirmbacher.

In the UK, the DCMS Committee looked at taking existing rules and applying them to the digital era. Its report recommends that the rules given to Ofcom under the Communications Act 2003 for setting and enforcing standards for content on TV and radio be used as a ‘basis for setting standards for online content.’

James Harper, Chair of the IBA Internet Business Subcommittee and Head of UK Legal at LexisNexis UK, believes there’s a benefit in using rules already well-tested in another field, but adds that ‘we would need to increase the ambit of their control and the regulations that they can enforce to ensure it captures the appropriate channels of communication. And that takes you back to the problem [of] how you define what is and isn’t allowed.’

Robert D. Balin, Chair of the IBA Media Law Committee and a partner at Davis Wright Tremaine, points to a changed media landscape – with the continuing demise of traditional news outlets – as contributing to the disinformation phenomenon. ‘The proliferation of sources without the proliferation of traditional editorial processes is part of the problem,’ he notes.

A combination of approaches seems appropriate. ‘There’s a role for governments to play. There’s a role for social media platforms to play. Technologists, governments, media: all are now working together on this,’ says Balin.