The evolution of surveillance technology

Isabelle Walker, IBA Junior Content Editor
Wednesday 23 July 2025

Artificial intelligence is expanding what surveillance technology can achieve, increasing the threat it poses to human rights. The swift implementation of effective regulation is essential.

AI is transforming the security and surveillance sector. Rapid developments in areas such as live facial recognition technology (LFR) have resulted in the software being rolled out across many areas of public and private life, at relatively low cost and with the promise of data processing capabilities far beyond those of a human.

For example, Delhi’s police force is set to expand its use of facial recognition, moving from localised operations to deploying the technology citywide. In May, the Polish parliament took a similar step, authorising facial recognition and behavioural tracking tools for use by law enforcement and municipal authorities. Meanwhile, in the UK, the police believe the use of LFR cameras may become ‘commonplace’ in England and Wales, with the number of faces scanned having doubled to five million in the last year.

AI surveillance technology has also been used by businesses to alert retailers to the presence of individuals within their stores who have a history of shoplifting activity.

Helen Kimber is a senior data scientist at Genetec, a security solutions company working with both private and public entities. She says that common client requests include AI systems that can ‘scrub’ video – meaning to quickly navigate through it – to detect objects such as unattended bags. She says the aim is for the systems to reduce the amount of work for humans by conducting a ‘first pass’ over vast amounts of data to streamline security responses. However, Kimber stresses the need for there to be a ‘human in the loop’ to make decisions about how to act on data.
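The pattern Kimber describes can be shown with a minimal sketch, assuming a generic object detector rather than Genetec's actual software: candidate events (here, a bag that lingers with no person in view) are flagged and queued for a human operator instead of being acted on automatically. The function names, labels and dwell threshold below are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Detection:
    label: str     # e.g. "bag" or "person"
    box: tuple     # bounding box (x, y, w, h)

def first_pass(frames: Iterable[int],
               detect: Callable[[int], list],
               dwell_frames: int = 300) -> list:
    """Return (frame, detection) pairs flagged for human review."""
    first_seen = {}        # bag box -> frame index when it became unattended
    review_queue = []      # candidates for a human operator, not automatic action
    for idx in frames:
        detections = detect(idx)
        person_in_view = any(d.label == "person" for d in detections)
        for bag in (d for d in detections if d.label == "bag"):
            if person_in_view:
                first_seen.pop(bag.box, None)     # someone nearby: reset the timer
                continue
            first_seen.setdefault(bag.box, idx)
            if idx - first_seen[bag.box] >= dwell_frames:
                review_queue.append((idx, bag))   # a human decides how to respond
                first_seen.pop(bag.box)           # flag each lingering bag only once
    return review_queue
```

The design keeps the 'human in the loop' Kimber describes: the software narrows vast footage to a short queue, and a person makes the final call.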

Transforming policing

The policing applications of the technology are potentially transformative. In terms of LFR specifically, cameras map an individual’s facial features and AI immediately compares the data to images on a watchlist, triggering an alert when a match is made.
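As a rough illustration of that matching step, the sketch below assumes each detected face has already been reduced to a numeric embedding vector by some model; the similarity measure, threshold and names are illustrative assumptions, not any vendor's actual implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(face_embedding, watchlist, threshold=0.6):
    """Return the best-matching watchlist identity, or None if nothing clears the threshold."""
    best_id, best_score = None, threshold
    for identity, reference in watchlist.items():   # watchlist: {name_or_id: embedding}
        score = cosine_similarity(face_embedding, reference)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id   # a non-None result triggers an alert for an officer to verify

# Faces scoring below the threshold produce no alert and are simply discarded -
# the sense in which people not on the watchlist are 'not seen' by the system.
```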

Fraser Sampson, a former UK Biometrics and Surveillance Camera Commissioner who is also a non-executive director of FaceWatch, a facial recognition retail security company, likens the method to ‘spearfishing’ and explains that bystanders not featured on the watchlist are simply not seen by the cameras.

In July, the Metropolitan Police said that more than 1,000 arrests had been made using LFR since January 2024, including individuals accused of violent crimes. However, wrongful arrests have raised concerns about accuracy and algorithmic bias. In a recent case in Detroit, an individual was wrongfully arrested for retail fraud after surveillance footage was matched to his driver’s licence photo.

Sampson also draws attention to the ‘backstage’ uses of AI for retrospective facial matching. This involves the scrubbing by AI facial recognition software of existing photos or videos taken on regular cameras. These systems are ‘much more widespread and probably much more intrusive – they just don’t get the publicity,’ he says.

In a context where the surveillance relationship between the state and the citizen is rapidly changing – and the police are increasingly utilising footage sent to them by members of the public who witness an incident as bystanders – concerns are being raised about whether civilians are aware of exactly how their footage will be used by police.


‘Police surveillance is no longer where they choose to put up their cameras, it’s what they choose to do with the images from everyone else’s camera,’ says Sampson. And while the police have rules in place for pixelating out background images of people they’ve recorded, ‘there’s none of that mandated in terms of [members of the public] just sending in footage that they think is relevant and helpful,’ he adds.


Hikvision video surveillance cameras on pole. Stock.Adobe.com/Ruslan

This highlights a key theme in this space – the public’s limited understanding of the nature and implications of AI surveillance technology. While polls conducted in the UK and EU show that the public is broadly in favour of the use of AI surveillance within law enforcement, other surveys point to a ‘knowledge gap’ in the public’s understanding of AI. And so, while the technology has raced ahead and society has hurried along behind it, Sampson doesn’t believe there’s been a ‘similar kind of expansion of understanding.’

Civil society has long played a key role in resisting any perceived overstepping by the state into the private lives of citizens. But there haven’t been any significant protests against AI surveillance technology. Some believe this may be because people are now used to sharing many aspects of their lives, for example through social media. Others point to the visual appearance of the new systems, which look very similar to older CCTV apparatus – to which the public is now desensitised.

And while the reduction in constant human oversight is one part of AI’s core appeal for some, it’s a cause for concern for others. Migrants’ Rights Network (MRN) has expressed alarm about a series of cameras placed along the southeast coast of England after finding them to be AI-powered surveillance towers produced by US defence company Anduril, capable of algorithmically identifying, detecting and tracking individuals or subjects considered to be of interest.

Through Freedom of Information requests and work by researcher Samuel Storey, MRN discovered that the cameras were part of a contract between Anduril and the UK government worth around £21m, set to end in June 2026. Julia Tinsley-Kent, Head of Policy and Communications at MRN, says that the increased surveillance and militarisation of borders pushes migrants and asylum seekers to take increasingly dangerous routes. Global Insight contacted the UK Home Office and Anduril for comment but had not received a response at the time of going to press.

Favoured by autocracies

If these technologies are raising ethical and privacy-related concerns in democratic societies, such worries are exacerbated in autocratic regimes, or where democratic backsliding has taken place.

In Myanmar, the military junta has used AI surveillance technology to monitor citizens and target dissidents. Some of these technologies were purchased from Western companies, despite restrictions on their export and use. ‘The result has been an enhanced architecture for state violence, which the Tatmadaw [Myanmar’s military] has used to kill hundreds of protesters,’ says Federica D’Alessandra, Co-Chair of the IBA Rule of Law Forum.

In Hungary, a series of legislative measures passed quickly through the country’s parliament in March curtailed the right of assembly in support of LGBTQI+ rights and allowed LFR to be used to detect and prosecute petty offences, such as attending a ‘Pride’ event.

Civil society groups argue that the legislation uses the technology in a manner that violates the EU’s AI Act. Under Article 5 of the Act, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement is generally forbidden. ‘None of the limited exceptions in the Article 5(1)(h) would allow for the use of [LFR] to prosecute all petty offences,’ says Ádám Remport, a legal expert at the Hungarian Civil Liberties Union (HCLU), ‘let alone to detect individuals lawfully exercising their right to freedom of assembly.’ The European Commission is now investigating whether the Hungarian legislation complies with the EU AI Act.

The Hungarian Cabinet Office told Global Insight that the law doesn’t conflict with EU standards ‘as – for the time being – there is no legal possibility for automatic decision-making without human control in either administrative or other proceedings.’ They say the legislation was introduced to protect the rights of children, which, according to the Hungarian Fundamental Law, take precedence over all other fundamental rights. ‘Accordingly, the law on the right of assembly stipulates that in Hungary it is prohibited to hold assemblies that make pornographic material available to or depict sexuality for its own sake among persons under the age of 18,’ says the Cabinet Office.

Despite the ‘chilling effect’ associated with the use of LFR, Budapest held its largest ever Pride event in June. ‘This signals not only a breakdown of legal legitimacy, but also a strong public will to actively resist rights violations,’ says Szabolcs Hegyi, a senior expert in the Political Freedoms Program at HCLU. Indeed, Budapest police later officially declared that no proceedings would be launched against participants in the Pride event. The Hungarian Cabinet Office says that the police aren’t initiating proceedings as a result of what the Office claims are ‘contradictory statements’ made by the organisers and ‘uncertainty regarding the interpretation of the law caused by the involvement of local government.’

The difficulties of dual-use technology

It’s vital, then, that AI surveillance technology is appropriately regulated so that it’s used ethically and the privacy concerns of citizens are addressed. This is challenging in the current climate, however, as innovation in the AI field is often regarded as crucial to the economies of the future.

A central conflict regarding the use, development and regulation of AI surveillance technologies is that the software presents an example of a ‘dual-use’ technology – one with both civilian and military applications, and that can be used for legitimate as well as illicit purposes.

For example, China employs AI surveillance technologies domestically within ‘smart city’ developments in much the same way as many other countries. In Hangzhou, in eastern China, a city-wide digital governance platform called City Brain is utilised to analyse data from different sources, including traffic cameras and public transport systems. This data allows decisions about the city’s functionality to be made in real time.

But Beijing has also been accused of employing AI surveillance technology for other purposes. Most controversially, it’s alleged that AI surveillance technologies – including facial recognition and ‘emotion-detection’ software – are being used to track, monitor and persecute Uyghur Muslims in the Xinjiang region. The Chinese government has denied any human rights violations against the Uyghur people and did not respond to Global Insight’s requests for comment.

Kimber says that to prevent the misuse of their systems, Genetec – among other measures – doesn’t allow for searches of subjective or charged terms, such as ‘suspicious’. Kimber says she doesn’t believe ‘that term provides you with anything other than an insight into algorithmic bias.’
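A safeguard of that kind can be as simple as a deny-list applied before any search runs. The sketch below is a minimal, hypothetical illustration of the idea, not Genetec's actual policy engine; the term list and function name are assumptions.

```python
# Example deny-list of subjective or charged descriptors (assumed entries).
BLOCKED_TERMS = {"suspicious", "loitering", "nervous"}

def validate_search_terms(terms: list) -> list:
    """Reject subjective descriptors; allow only concrete, observable attributes."""
    rejected = [t for t in terms if t.lower() in BLOCKED_TERMS]
    if rejected:
        raise ValueError(f"subjective terms are not searchable: {rejected}")
    return terms

# Usage: validate_search_terms(["red jacket", "backpack"]) passes,
# while validate_search_terms(["suspicious"]) raises an error.
```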

This is a key area of contention for the industry given the significant and concerning potential of uninhibited searching for vague attributes or protected characteristics. However, it’s not as simple as a neutral technology being applied distinctly to either ‘civilian’ or ‘military’ uses.

Research by the AI Now Institute has revealed a growing interest in commercial AI models for military purposes, including intelligence, surveillance and reconnaissance.

Heidy Khlaaf, Chief AI Scientist at the AI Now Institute, says that some companies pitching foundation model-based AI to the military are ‘proposing commercial models that already have been pre-trained on a set of commercial data. They are not talking about military-exclusive commercial models that have been purely trained on military data.’

The implications of this could include a situation in which an AI-powered military device finds a false positive, which is then acted upon and results in the loss of an innocent life. But some defence companies argue that using commercial models can offer an avenue for faster deployment and cost reduction.

Kimber says the balance between privacy and security is a difficult one. ‘We all want absolute security and absolute privacy at the same time, but they are, in many cases, mutually exclusive,’ she says. She adds that companies and third parties need to ask themselves what their values are and consider commitments made within their AI guidelines before they make decisions about their offerings.

Ronald Hawkins, Senior Director of Industry Relations at the Security Industry Association (SIA), says his organisation is currently working on a project focused on AI use cases within security to guide their members. The National Institute of Standards and Technology, meanwhile, has developed the AI Risk Management Framework, which is intended for voluntary use ‘to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.’

But while it’s important for security companies and associations to interrogate their practices, legally binding rules and regulations must also be created and enforced. At present, the regulatory landscape is sparse. The EU’s AI Act is hailed as the ‘first-ever legal framework on AI’. However, Mario Di Carlo, an officer on the IBA Communications Law Committee, believes it’s ‘widely insufficient to deal with AI surveillance’. The primary means of regulating AI surveillance remains the EU General Data Protection Regulation, explains Di Carlo.

Regulation versus innovation

Making matters more challenging for the development of comprehensive regulation in this area is that AI has emerged as a form of 21st-century space race, with China and the US as the key participants.

In this contest, regulation is often positioned as the enemy of innovation, particularly in the US. In January, US President Donald Trump issued an executive order, ‘Removing Barriers to American Leadership in Artificial Intelligence’ (the ‘Removing Barriers EO’), which rescinds former President Joe Biden’s directive on the ‘Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence’.

The Removing Barriers EO calls for federal departments and agencies to revise or rescind all policies, directives, regulations and other actions taken by the Biden administration that are ‘inconsistent’ with ‘enhanc[ing] America’s global AI dominance.’ Despite this, in July the US Senate rejected a proposed ten-year ban on states regulating AI models, signalling a desire to protect the public from the potential harms of AI.

Meanwhile, in China, data sharing between AI companies and the government may be giving Beijing the edge. Developing AI is data-intensive – and the Chinese state has access to vast amounts of data. Research conducted by Martin Beraja, an acting associate professor at the University of California, Berkeley, found that ‘the provision of government data to Chinese AI [companies] servicing the state contributed to their rise as global leaders in facial recognition technologies.’

One lesson here for liberal democracies, says Beraja, is that they perhaps ‘need to think better about how to balance privacy concerns when it comes to government data with the provision of that data to AI [companies], in particular’ if they want to win the AI race. Beraja adds that one area with significant potential is the healthcare sector, where AI could provide new diagnostic tools or treatments.

Exporting digital authoritarianism

In this context of heightened competition, D’Alessandra – who’s Deputy Director of the Oxford Institute for Ethics, Law and Armed Conflict at the Blavatnik School of Government – highlights that ‘ethical concerns surrounding the export of technologies to third-party states may be superseded by [being concerned about outdoing] a geopolitical adversary.’ Indeed, in recent years, campaigners have raised concerns about the export of AI surveillance equipment and software to states with poor human rights records and countries where the technology has subsequently gone on to be used for illicit purposes.


Beraja’s research shows, for example, that autocracies and weak democracies are more likely to import surveillance AI from China. This tendency is exacerbated where such regimes are experiencing periods of domestic unrest. ‘Surveillance AI supports autocratic regimes because it makes repression cheaper and easier, so toppling the regime becomes harder,’ he says. At the same time, the innovation and growth of industry lends the regime a financial market to tap into, creating a ‘virtuous feedback loop’ through which autocratic regimes achieve stability and legitimacy.

D’Alessandra highlights that the trend of exporting these technologies to certain regimes is far more complex than identifying China as the only culprit. ‘Myanmar’s devastating surveillance infrastructure includes technology purchased from US, European and Israeli companies,’ she says. ‘This is not an isolated story, as Western suppliers have helped to bolster the surveillance capacities of abusive governments for years.’

Beraja meanwhile draws attention to the nature of surveillance technology exports. Unlike a traditional export, where a product is purchased and then physically shipped over borders, in the case of AI surveillance technology ‘what’s really being contracted is a service,’ Beraja explains. Although physical cameras may be involved, the primary function of the service is data processing. This has raised questions about where the data collected by AI surveillance systems is sent, how it’s processed and who has access to it. In the case of Chinese technology companies, national security concerns have been raised regarding potential data sharing with the country’s government.


In particular, Hikvision – the world’s largest surveillance camera maker – has been scrutinised for its close ties to the Chinese state. In 2022 the US banned video surveillance equipment from several prominent Chinese brands, including Hikvision, citing ‘unacceptable risk to the national security of the US’. A leaked Pentagon document described Hikvision as ‘partnering with Chinese intelligence entities’. In February, Hikvision failed in an attempt to overturn the ban in court.

In 2022, UK government departments were banned from installing Hikvision cameras – alongside equipment by some other Chinese companies – on ‘sensitive sites’. Despite this, they are still permitted for use ‘in wider public settings’, such as by local authorities. Hikvision has denied presenting a national security threat to governments.

‘In most cases, our video security equipment is installed on closed networks,’ a representative of Hikvision told Global Insight. ‘These networks are operated and controlled by end-users and not by Hikvision.’ The representative added that Hikvision has ‘no visibility of access into end users’ video data’ and that it would be ‘impossible’ for Hikvision to share end user video data with any government or entity that requested it ‘because, again, Hikvision cannot access it.’

Striving for balance

To make meaningful inroads on regulation, D’Alessandra says we need international cooperation – including between the US and China. ‘A growing policy strategy, favoured by the EU among others, is to impose trade controls on cybersurveillance items that would allow governments to monitor, extract and analyse data from private citizens,’ she says.

One model to look to – both in terms of international agreement and domestic legislative changes – could be the export control systems in the landmark 2014 UN Arms Trade Treaty, says D’Alessandra. This treaty contains specific provisions against the sale of weapons where they may be used in the commission of atrocities.

China’s position as the top exporter of AI surveillance technology is undeniably complicating things, however. ‘A world where China is the technological leader and is large enough to meet the demand of the rest of the world for these types of technologies,’ Beraja says, ‘is a very different world to one where liberal democracies could unilaterally decide to regulate a technology.’

And regulation is key not only to prevent the unethical use of AI surveillance, but to ensure that the public accepts the technology being utilised in legitimate cases. Sampson says that trust and confidence in the institutions, the technology and how it’s used by the police will be critical, but significant gaps in regulation have the potential to lead to problematic use cases. ‘And each one of those use cases that comes up potentially erodes the public’s trust and confidence,’ he adds.

But shying away from surveillance AI isn’t necessarily an option either. The technology has immense potential and significant implications for national and domestic security, the efficiency of workforces and the effectiveness of policing.

Sampson can imagine a point in the future where police are questioned on their choice to not use the technology in a circumstance where a crime has occurred. ‘The risk you’ve got on the policing side is how do you decide which images you can ignore safely? All the risk is in not looking,’ he says.

It’s clear that the use of AI surveillance technologies is already widespread. What matters now is how well we understand these technologies and regulate them, so that they’re used ethically and the privacy of citizens is balanced against the technology’s capacity to enhance security.

Isabelle Walker is the IBA Junior Content Editor and can be contacted at isabelle.walker@int-bar.org

Header image: Adobe Stock/Ruslan; Adobe Stock/Sandra; Adobe Stock/SL Graphic Shop