Time to tackle internet anonymity

Arthur Piper, IBA Technology Correspondent

Tech giants are under fire for facilitating all sorts of unwanted activity online. But finding more effective solutions than self-regulation is far from straightforward.

The government of the United Kingdom is planning to crack down on nasty and illegal behaviour on the internet – especially online bullying, trolling, fake news and hate speech. To that end, in May, the Department for Digital, Culture, Media and Sport launched a set of online safety proposals containing a draft code of conduct for social media companies.

The document – a response to the government’s own Internet Safety Strategy green paper – bemoaned the fact that users’ experiences of how well large social media companies responded to abusive behaviour were out of kilter with the way the tech businesses said they behaved. For example, while many companies had stringent policies on taking down abusive material, in practice they often failed to remove such content in a timely manner.

The message from the UK government seems to be that self-regulation has not kept pace with the risks users face when using social media platforms. Recommendations from the Department in July went further, suggesting that businesses such as Facebook, Twitter and Instagram be reclassified to sit somewhere between online platform and publisher. Those companies have long held that they have no responsibility – and therefore no liability – for the often harmful and hateful content that users post.

They do, however, acknowledge a problem with fake accounts. Twitter, for example, reportedly removed up to 70 million fake accounts from its platform in May and June this year, according to The Washington Post. Many such accounts are automated bots that use details taken from legitimate users. They post on a wide range of issues, from politics to pornography, and are widely blamed for degrading the quality of the user experience.

The UK Minister for Digital and the Creative Industries, Margot James, has been reported as saying that the freedom to be anonymous online had been substantially abused. Though not widely reported, the comment was perhaps intended to signal potential future action when the government releases its white paper on the subject – expected before the end of 2018.

It’s a view that has significant support. Ben Wallace, Minister of State for Security and Economic Crime, for example, has said that anonymity is a major factor in bullying and grooming, and that it is time to introduce a digital ID for every internet user to combat the issue. Such a proposal would depend on an identity check similar to the one planned for access to age-restricted online content such as pornography. Users might need to take a trip to the post office with a passport in order to obtain online credentials.

Attempts by lawmakers to end anonymity online are not new, as Martin Schirmbacher, Co-Chair of the IBA’s Technology Law Committee, points out. But, so far, they have not been particularly successful – not least because of the massive hurdles they would need to clear. For example, Europe’s General Data Protection Regulation, which came into force in May this year, supports the right of individuals to withhold their names online. ‘The GDPR has just renewed the principle of data minimisation as one of the few principles of data processing,’ Schirmbacher says. That gives people the right to post anonymously to prevent their data being processed in ways that they dislike.

The courts in Europe are also supporting the principle of anonymity. The Landgericht Berlin, a regional court in Germany, held that a clause in Facebook’s user terms broke the law because people had to agree to use their real names when registering on the platform – despite having no clear idea of how those details would be used. Pre-set privacy settings could have enabled people to see where Facebook users were located during live-chat sessions, for example. In addition, the German Telemedia Act requires providers to make it possible to use a telemedia service anonymously, or under a pseudonym, as long as this is technically possible and reasonable for the provider.

Schirmbacher says that even if such rules came into effect, online ID measures would be hard to implement in practice – and could be prohibitively expensive. ‘Not only banks but many services try to identify their users mainly to be able to claim damages or compensation if needed,’ he says. ‘The use of ID mechanisms might work for expensive services but would most likely lead to the non-use of blogging or commenting services if they had to be applied prior to the use of the service.’


The GDPR gives people the right to post anonymously to prevent their data being processed in ways that they dislike


Large tech companies have based their business models on providing access to services with as little friction as possible, making money by crunching the data they collect from users to sell targeted advertising and services back to them. Few people are likely to relish the idea of paying for a service that is already making money out of their online activities.

If the UK white paper does take the route of removing anonymity, there could be problems with the way the government has chosen to define harm. The current proposals say the government wants to make whatever is considered unacceptable offline unacceptable online too. That is a reasonable goal, but it fails to restrict the definition of harm to illegal behaviour – something that would have a foundation in existing law. Instead, it also covers behaviour that is harmful but legal.

Ben Bradley, Senior Policy Manager at UK technology industry body techUK, says: ‘One reason industry action on child abuse imagery has been so effective is not just because of the moral and legal obligations on companies, but because of its very clear, black-and-white nature. This is not the same with harmful but non-illegal content, where private companies must make difficult decisions that may encroach on freedom of expression.’  

Bradley says the government ought to provide clarification on grey areas of the law and legal definitions for terms such as ‘harmful content’ and ‘offensive communications’. Without such clarity, companies would need to set their own standards of behaviour – the very process the government has so far found wanting.

Arthur Piper is a freelance journalist. He can be contacted at arthur@sdw.co.uk