European Commission tackles disinformation

Arthur Piper, IBA Technology Correspondent

With online disinformation increasingly damaging democracy, Global Insight assesses the European Commission's action plan to address the issue, to which Facebook, Google and Twitter have already signed up.

The European Union’s parliamentary elections in late May were a testing time for the main parties. Populist movements are on the rise and a range of mischievous actors – from pranksters and data brokers to foreign government agencies – are able to take fake news and disinformation campaigns to new levels.

While there’s little it can do immediately about the rise of populism, the European Commission launched its Action Plan against Disinformation at the end of last year.

The initiative is an acknowledgement of growing public concern about allegedly systematic attempts to undermine the outcome of electoral campaigns, especially through social media. The Commission says 73 per cent of European internet users expressed concern about disinformation and misinformation in the run-up to local, national and European elections.

Both the 2016 US presidential election and the UK's referendum on EU membership have been the subject of investigations into the influence of illicit online activity in the run-up to those votes. Early indications, in the UK at least, point to voter manipulation via social media platforms, with signs that Russian sources were involved.

A report by the UK Information Commissioner’s Office (ICO) in July 2018, for instance, shed light on the murky world of unlawful political advertising. In one instance, campaigners were able to use data harvested from up to 87 million global Facebook members to target them with highly tailored political messages based on their past online behaviour. In doing so, they violated personal privacy regulations. They may have also altered the outcome of a democratic process.

‘The invisible, “behind the scenes” use of personal data to target political messages to individuals must be transparent and lawful if we are to preserve the integrity of our election process,’ concluded the UK Information Commissioner, Elizabeth Denham.

News of fake news

Some countries have legislated against the spread of fake news and disinformation. Qatar’s 2014 cybercrime laws, for example, make it illegal to spread false news that jeopardises the safety of the state – but the definition of what constitutes fake news is vague and potentially open to state abuse.

The European Commission has responded with its Action Plan. This is because – as things stand – disinformation in Europe is not automatically classed as unlawful content, unlike, for instance, hate speech, incitement to terrorism or child pornography. That means any measures need to be balanced against legal rights such as freedom of speech. To that end, the first two of the Action Plan's four 'pillars' focus on improving the ability of government agencies, both inside and outside Europe, to detect and respond to disinformation. The fourth aims at raising public awareness of disinformation and 'improving societal resilience'. These measures are essentially reactive; the third pillar, by contrast, aims at mobilising the private sector to tackle the issue.

Choking off sources of disinformation entails the more problematic job of convincing refractory social media behemoths – Facebook, Google and Twitter – to sign up to and act on a code of conduct. All three have agreed to follow the EU’s Code of Practice on Disinformation.

In it, disinformation is defined as 'verifiably false or misleading information' which, cumulatively, 'is created, presented and disseminated for economic gain or to intentionally deceive the public'. Not only that: it must also be capable of causing 'public harm', understood as 'threats to democratic political and policymaking processes as well as public goods such as the protection of EU citizens' health, the environment or security'.

The definition is quite wide and is clearly not intended to serve as a binding legal one. The Code requires its signatories to acknowledge that there is a problem and that they are helping to formulate a response using the technologies on their own platforms. In detail, it asks for monthly progress reports on how those companies scrutinise ad placement, ensure transparency over political advertising, close fake accounts and manage automated bots. In other words, the European Commission is attempting to extend its reach into the previously opaque world of social surveillance technologies.

The move has been welcomed by some legal experts. Pedro Peña, an Attorney of the Spanish Parliament and an experienced internet lawyer and commentator, has said that the open acknowledgement that there is a problem, together with a pluralistic approach among Member States and private business to deal with it, is a major step forward in the politics of the internet. While technology companies have traditionally distanced themselves from the content carried on their platforms, their agreement to take some responsibility for monitoring and taking down (dis)information from their sites marks a further move away from that position. But how ready are they to take on such a role in practice?
The answer is: not quite ready. In February, the Commission published the first of its progress reports and found the previous month's performance of all three companies lacking. Information on each of the key metrics was either non-existent or too vague to be of use. '[T]he platforms have failed to identify specific benchmarks that would enable the tracking and measurement of progress in the EU,' the document concluded.

By March, the Commission noted the situation had improved. Google said it would introduce its EU Elections Ads Transparency Report in April; Facebook that it would launch its Ad Library in late March to provide a searchable database for political issue-based ads; and Twitter that it would extend its political campaigning ads policy to cover the EU, with relevant ads viewable in its Ad Transparency Center.

The intention was that, in time for the European elections, the tracking algorithms beneath such platforms would be working not just to predict and modify human behaviour in the service of commerce, but to identify and neutralise some of the potentially harmful messages that flow through those systems. If this proves successful, it could provide a working template for cooperation between sovereign states and global technology businesses in a range of areas. The outcome and future of this initiative are likely to be far more important than the latest round of European election results.

Arthur Piper is a freelance journalist. He can be contacted at arthurpiper@mac.com