Digital surveillance’s threat to human rights

Arthur Piper, IBA Technology Correspondent

The landscape for potential human rights abuses through digital surveillance is changing as the methods used grow ever more powerful. Global Insight considers this evolution and the need for regulatory frameworks to adapt.

It is perhaps fitting that a key business involved in the tracking and capture of criminals through covert digital surveillance operates under a cloud of public suspicion and criticism. NSO Group, headquartered in Herzliya, Israel, sells such technology exclusively to government agencies worldwide to counter terrorism and crime.

But critics claim its reach is longer and more sinister. Reportedly, the company’s software – a form of ‘Trojan horse’ malware named ‘Pegasus’ – not only targets smartphones to harvest data, but can also scrape an individual’s data from the servers of the world’s biggest technology companies, specifically Amazon, Apple, Facebook, Google and Microsoft. One newspaper said it had seen details of the hack in documents alleged to be from the company’s own product demonstrations.

Not so, NSO executives countered. ‘NSO’s products do not provide the type of collection capabilities and access to cloud applications, services, or infrastructure suggested in this article,’ the organisation has stated.

The mobile hacking program is highly sophisticated and effective – the biggest scalp claimed by NSO is the notorious Mexican drug baron known as ‘El Chapo’. NSO injected its proprietary malware into his phone, allowing law enforcement agencies to follow his every move and arrest him. An article by journalist Ronen Bergman, featured on the NSO website, raises the question of whether Pegasus also allows government actors to abuse human rights by, for instance, monitoring and disrupting the work of journalists, politicians and humanitarian agencies. NSO’s Chief Executive Officer Shalev Hulio admits that the potential for such abuse exists – and that his company has three times thrown the master switch on clients who overstepped the ethical line.

In late October 2019, the messaging company WhatsApp filed a lawsuit against NSO Group in a California court, alleging that technology sold by NSO targeted the mobile phones of more than 1,400 WhatsApp users during a 20-day period from late April to mid-May 2019. The users, it claims, included human rights activists, lawyers, academics and journalists. WhatsApp alleges such attacks were in violation of US law. NSO’s statement disputes WhatsApp’s allegations ‘in the strongest possible terms’, adding that the company will ‘vigorously fight them’.

The landscape for potential human rights abuses is evidently changing. Only two years ago, the US pressure group Human Rights Watch was accusing the Saudi Arabian authorities of cracking down on what the regime considered to be inappropriate sexual relationships. ‘If individuals are engaging in such relationships online, judges and prosecutors utilise vague provisions of the country’s anti-cybercrime law that criminalise online activity impinging on “public order, religious values, public morals, and privacy”.’ That surveillance could have been achieved by government agents monitoring online forums and social media.

Social credit

Since then, automated surveillance devices have become more prevalent and networked – making the practice potentially more subtle and ubiquitous. Mobile hacking, social engineering, network surveillance, facial recognition technologies, GPS tracking and many other techniques routinely used in concert to detect and prevent crime and terrorism can also be turned against citizens.

China’s push to develop a nationwide Social Credit System has been represented as a dystopian nightmare of total surveillance. According to some media reports, by 2020, China’s system could be producing a single number – or score – for each citizen. This would be calculated by algorithms that compile people’s social media connections, buying histories, location data, facial recognition video footage and more. The resulting credit score could be used to allow or prevent people from having access to housing, jobs and transport.
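
To make the reported mechanism concrete, here is a minimal sketch, in Python, of how such a single-number score might notionally be computed. Everything in it – the four input signals, their weights, the 0–1,000 scale and the access threshold – is a hypothetical illustration, not a detail of any documented system.

# Purely hypothetical sketch: the signals, weights, scale and threshold
# below are illustrative assumptions, not details of any real system.

WEIGHTS = {
    'social_connections': 0.3,   # network analysis of social media ties
    'purchase_history': 0.2,     # buying patterns
    'location_data': 0.2,        # GPS and location trails
    'facial_recognition': 0.3,   # CCTV match records
}

def credit_score(signals: dict) -> int:
    """Collapse several normalised (0-1) surveillance signals into one score."""
    weighted = sum(WEIGHTS[name] * value for name, value in signals.items())
    return round(weighted * 1000)  # a single number per citizen

def may_access(score: int, threshold: int) -> bool:
    """Gate access to a service (housing, jobs, transport) on the score."""
    return score >= threshold

citizen = {
    'social_connections': 0.70,
    'purchase_history': 0.50,
    'location_data': 0.60,
    'facial_recognition': 0.65,
}
score = credit_score(citizen)
print(score, may_access(score, threshold=650))  # prints: 625 False

The structural point survives the invented details: once heterogeneous surveillance feeds are normalised into a single number, one silent threshold comparison can decide whether a citizen gets housing, a job or a train ticket.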

Jeremy Daum, a senior fellow at Yale Law School’s Paul Tsai China Center in Beijing, who has studied the development of the system, says that in reality China has a patchwork of regional pilots and experimental projects that falls short of the hype. ‘Because China is often held up as the extreme of one end of a spectrum, I think that it moves the goalposts for the whole conversation,’ Daum has said. ‘Anything less invasive than our imagined version of social credit seems sort of acceptable, because at least we’re not as bad as China.’

Secrecy

Even without such nationwide and ubiquitous systems, automated abuses of human rights are widespread. The United Nations has condemned unlawful and arbitrary surveillance as an infringement of fundamental human rights – yet acknowledges the practice continues unabated, not least because it is shrouded in secrecy. David Kaye, the UN Special Rapporteur on freedom of opinion and expression, took the unusual step in June 2019 of calling for a moratorium on the global sale and transfer of such tools until legal policies and frameworks are put in place to make them transparent and subject to accountability through regulation.

Kaye’s report to the UN Human Rights Council, Surveillance and human rights, spells out the way that the market for such technologies among governments with well-known oppressive regimes has thrived amid weak controls on technology transfers. States are already obliged not to interfere with privacy or restrict freedom of expression, but, he continues, ‘it is not clear that States generally afford affirmative legal protections against targeted surveillance’.

Kaye’s solution is to attack the secrecy and poor regulatory frameworks that surround such practices by curtailing private sector trade. It draws in part on existing precedents for working with technology companies. For example, the European Commission has highlighted the notion that all such programs and devices should embed ‘human rights by design’.

In the US, the non-binding provisions of the Wassenaar Arrangement – the multilateral regime governing exports of conventional arms and dual-use goods and technologies – promote transparency and responsibility in such exports. But the absence of remedies outside of litigation and the lack of effective enforcement mechanisms ‘raises serious questions about the likelihood of holding companies accountable for human rights violations’ without further measures, Kaye concludes.

States must immediately cease the export, sale and transfer of private sector surveillance tools until a proper human rights-compliant framework is in place, Kaye says. Similarly, those who buy such technology must ensure that proper laws and systems of regulation and redress exist to provide actual remedy for any abuses. Companies in the sector should not only take their responsibility to protect human rights seriously, but should also submit themselves to human rights audits and verification processes.

If it is an ambitious agenda, the proliferation of surveillance-by-design devices makes it an essential one. Lifting the veil on an industry dedicated to remaining out of sight will be difficult. Training sights on the very companies that may be enabling those abuses is the logical first step.

Arthur Piper is a freelance journalist. He can be contacted at arthurpiper@mac.com