
Let’s face it: challenges of face recognition technology in Latin America




Latin American Regional Forum Scholarship


Maria Jose Badillo Juarez,

Public lawyer specialising in regulation and antitrust, Mexico


‘Ciudad Segura’ reads above the cameras on almost every lamppost in Mexico City. Similar phrases are used in big cities of Argentina, Ecuador and Chile, where cameras with facial recognition technology (FRT) have been installed for surveillance purposes.[1] Many of these governments have justified the introduction of this technology by reference to the security crises in their countries. The Latin American region faces many security issues, particularly high rates of violence mainly caused by organised crime and drug cartels. In fact, the Americas represent only 13 per cent of the world’s population but 37 per cent of its murders:[2] organised and interpersonal crime kills more people in Brazil, Colombia and Mexico than in countries with internal conflicts or war zones.[3] The region also reports the highest levels of physical assaults and violent robberies,[4] with 36 per cent of the population having claimed to be a victim of a crime.[5]


The violent situation has led to a general sense of insecurity in most Latin American cities, and governments are responding with security strategies at a local and national level. But in a techno-solutionist era, it is important to confront the challenges of technology implementation and its possible impact, to avoid creating yet more problems. In this context, it is important to examine the principal challenges that governments need to handle when using FRT for security purposes. Latin American governments will face data protection issues and the potential for error related to FRT operation, as well as challenges in its use for enforcement purposes.


Facial recognition technology

Facial recognition technology is a system that allows a machine to identify a person the same way a human would – by looking at facial features, including:


  • the distance between the eyes;
  • width of the nose;
  • depth of the eye sockets; and
  • shape of cheekbones.


The features are used to create a faceprint with approximately 80 nodal points that serve as a unique template for future identification.[6] As with many biometric technologies, FRT was envisioned to improve security in multiple contexts – finding missing persons, securing transactions, validating identities at airports and even unlocking your phone.[7]


Most of these scenarios are for verification purposes, which is different from recognition. In short, verification means the algorithm works to confirm a claimed identity, while recognition means matching a ‘captured face’ against other pictures to establish whose face it is.[8] Recognition software needs to be supplied with a list of faces we want to recognise. The ‘watch list’ contains photos of criminal offenders[9] and the idea is to look for an ‘interesting person’ in the crowd. For example, if the police are looking for an offender, they need to surveil multiple spaces in order to have more chances of getting a match (and a possible catch).

For a match to occur, the software must go through a series of steps. First, the system will detect a face in the video image, align it and measure it to create a template.[10] Second, the template is translated into a code representing the exact features of the face.[11] Finally, the code of the captured face is compared with the codes of the existing faceprints on the watch list, which can lead to one or more matches.
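The detect–align–encode–compare sequence above can be sketched in a few lines of code. This is a minimal illustration under stated assumptions, not any vendor’s actual system: `encode` merely stands in for the trained model that maps an aligned face to a numeric faceprint, and the watch list, names and 0.9 threshold are all hypothetical.

```python
import numpy as np

def encode(aligned_face: np.ndarray) -> np.ndarray:
    """Stand-in for the template/code step: a real system uses a trained
    model to map an aligned face image to a numeric faceprint."""
    v = aligned_face.flatten().astype(float)
    return v / np.linalg.norm(v)  # unit vector, so dot product = cosine similarity

def recognise(captured_face: np.ndarray, watch_list: dict, threshold: float = 0.9):
    """1:N recognition: compare the captured face's code against every
    faceprint on the watch list; return all matches above the threshold."""
    probe = encode(captured_face)
    matches = [(name, float(probe @ faceprint))
               for name, faceprint in watch_list.items()
               if float(probe @ faceprint) >= threshold]
    return sorted(matches, key=lambda m: -m[1])  # best match first
```

Verification (1:1), by contrast, would compare the probe against only the single faceprint of the claimed identity rather than scanning the whole list.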


Facial recognition technology needs a database of faceprints that the ‘captured faces’ can be matched against. Governments have an advantage in the construction and installation of massive databases due to the general acceptance of facial identification norms and practices.[12] Indeed, many bureaucratic procedures process biometric data for official identification purposes, such as obtaining a driver’s licence or qualifying for social assistance.


Government agencies usually share their databases because it can lead to greater accuracy in the results. However, data aggregation needs to be done by the rules, and agencies should be legally enabled to do so. For example, millions of dollars were spent on the installation of biometric cameras in Colombia’s Transmilenio transit system, only for it to be realised that the system could not function without previous and existing data.[13] In that case, the main problem was not only the lack of a database but also the lack of competence of public officials to create one.[14]


Furthermore, FRT not only works with existing data but also collects new data. The quality of facial recognition depends on the amount of information input: ‘high quality’ databases are considered to have more than one picture of a person’s face.[15] Even when a database is robust, it will need to be updated at some point because recognition performance decreases over time.[16] Public officials need to constantly gather and store data from their video surveillance cameras or internally shared networks.


Facial recognition technology systems constantly store video information. The proliferation of cheap cameras and fast video processing computers[17] allows for the detection of more faces and recognition of more people. The only technical limitation for archiving every face is capacity.[18] We can assume government agencies have enough resources to obtain large capacity. Indeed, Latin American cities are already investing in the implementation of face recognition systems that can gather more data at a cheaper cost.[19]


Even when FRT focuses on looking for ‘wanted persons’ on a watch list, the possibility of data mining brings potential for abuse. The massive collection and storage capacity for biometric data allows authorities to look back at where a citizen has been in the past.[20] This implies that FRT could be considered not only a crime prevention tool but also a mass surveillance tool, where every citizen becomes a possible suspect and their past actions could be specifically tracked down for future incrimination.


Although FRT can be a helpful tool for public security purposes, governments need to face challenges related to its implementation. First, recognition software depends fundamentally on biometric data processing. This data is considered sensitive because it gives unique information that reveals ‘physiological and behavioral characteristics of individuals’.[21] Strict regulations like the General Data Protection Regulation give special protection and forbid the processing of biometric data ‘for the purpose of uniquely identifying a natural person’.[22]


The collection, use and storage of biometric data through video devices requires, at least, a legitimate interest and necessity.[23] In general terms, public security constitutes a legitimate interest for video surveillance. However, agencies need to make sure they are not processing more data than they need to reach their objectives.[24] Also, high data processing standards require fairness and transparency about surveillance activity so citizens can be informed about when, why and how they are being monitored. For instance, enforcement agencies should advise of the purpose of the surveillance – ‘for your safety’ announcements are not enough.[25]


The installation of cameras in public spaces with face recognition software brings a more pervasive and automated form of surveillance, as well as identification-at-a-distance.[26] Unlike other biometric technologies, FRT was designed for effective, accurate and real-time results.[27] Even though it relies on machine learning, FRT still needs human intervention. A major challenge with FRT is its false positive and false negative error rates. In the ‘most wanted’ scenario, a false positive error means the system matches the face of an innocent person with the face of an offender, while a false negative means the face of an offender is not matched with its own faceprint.


The question is whether the algorithms should be configured to cast more false positives or more false negatives.[28] This leads to a dilemma between a ‘pro-innocence’ and a ‘pro-guilt’ approach in FRT systems. Tolerating more false positives implies more people being detained and investigated, but improves the possibility of catching the right person. In a security programme, governments will likely choose the ‘pro-guilt’ approach because they are looking for criminals. Although it might seem invasive for innocent citizens, choosing a lower false acceptance rate could also make the technology useless. Thus, it is mandatory to have trained personnel able to deal with the errors to avoid false alarms and unjust accusations.
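The trade-off can be made concrete with a toy simulation. The score distributions below are invented purely for illustration (real genuine and impostor similarity distributions vary by system and population): raising the decision threshold reduces false positives (innocent people flagged) at the cost of more false negatives (offenders missed), and vice versa.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented similarity scores for illustration only:
# 'genuine'  = an offender's captured face vs their own watch-list faceprint
# 'impostor' = an innocent passer-by's face vs the watch-list faceprints
genuine = rng.normal(0.75, 0.10, 10_000)
impostor = rng.normal(0.45, 0.10, 10_000)

for threshold in (0.50, 0.60, 0.70):
    fnr = float(np.mean(genuine < threshold))    # offenders missed ('pro-innocence' cost)
    fpr = float(np.mean(impostor >= threshold))  # innocents flagged ('pro-guilt' cost)
    print(f"threshold {threshold:.2f}: miss rate {fnr:.1%}, false alarm rate {fpr:.1%}")
```

No threshold eliminates both error types at once, which is why the text insists on trained personnel to review every machine-generated match.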


Even so, there is a belief that machine-based systems could be more privacy friendly, because they are ‘always fair and can 100 per cent follow the predefined rules’.[29] Governments should favour software that assures privacy by design, because these systems only collect and process biometric data for specific purposes, and the source of the data complies with existing normative frameworks.[30] More importantly, privacy by design in FRT has procedures ‘in order to respect the dignity of people who could have been wrongly identified and to avoid transferring onto them the burden of system faults’.[31]


On the other hand, governments need to face the challenges of using FRT as part of law enforcement strategies. Some of the principal concerns relate to the need to regulate FRT results as trial evidence. For instance, governments should regulate the use of FRT results as evidence of identification or for establishing probable cause of an offence.[32] In turn, citizens should have the right to question the validity of that evidence, considering that FRT can be unreliable and that algorithms can even be biased by gender and race.[33] Finally, judges will need to decide whether the traditional standards of review should apply to accusations based on FRT evidence or whether such evidence demands stricter scrutiny.



Even though FRT was conceived as a non-invasive technology, it has been strongly questioned for its potential for abuse. What makes face recognition so controversial is the processing of biometric data in such a ‘passive’ way, where individuals are not aware they are constantly being identified. In general, Latin American governments will face challenges in the implementation of FRT, especially related to data protection, technology reliability and the use of FRT results for enforcement purposes.


Given the high level of violence in the region, FRT is envisioned as a formidable solution. Nonetheless, the lack of strong normative frameworks regarding both surveillance and data protection raises concerns about its potential for abuse. As a matter of fact, the Latin American region is usually below adequate standards of data protection and privacy laws.[34] If there are no limits on the collection and storage of biometric data by enforcement agencies, security policies can become just an excuse for a police state. FRT’s implications for privacy and other fundamental rights, such as freedom of expression and association, are the main reason why FRT is already banned in San Francisco and Oakland.


As with other technologies, FRT is not bad by itself. While there are still many issues related to FRT implementation, and clear dilemmas between security and privacy, the technology could help to fight the security crisis. Governments need to create or reform surveillance and data protection frameworks in order to grant broader protection to citizens. Individuals should have the right to know, access and request the deletion of their biometric data under clear rules. This is only achieved by a human-centred regulation of FRT where the highest standards of data protection are assured.


[2]UNODC, Global Study on Homicide, (Vienna: United Nations, 2019), 13.

[3]Robert Muggah and Laura Bailey, ‘We can halve most forms of violence by 2030. Here's how’, World Economic Forum,

[4]Robert Muggah and Katherine Aguirre, Citizen security in Latin America: Facts and Figures, (Igarapé Institute: 2018), 9.

[5]Ibid, 9.

[6]Kevin Bonsor and Ryan Johnson, ‘How Facial Recognition Systems Work’,

[7]Jesse West, ‘21 Amazing Uses For Face Recognition – Facial Recognition Use Cases’, accessed 30 September 2020

[8]Kevin Bowyer, ‘Face Recognition Technology: Security versus Privacy’, IEEE Technology and Society Magazine (2004), 12.

[9]Ibid, 10.

[10]Kevin Bonsor and Ryan Johnson, ‘How Facial Recognition Systems Work’, accessed 30 September 2020


[12]Kelly Gates, Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance (New York: New York University Press, 2011), 46-47.

[13]Pilar Sáenz and Ann Spanger, Cámaras Indiscretas: Análisis del fallido sistema de videovigilancia inteligente para Transmilenio (Colombia: Fundación Karisma, 2018), 4. 

[14]Carolina Botero, ‘Cámaras biométricas en Transmilenio, casi recreamos un “1984”’, El Espectador. Available at:, accessed 30 September 2020

[15]Mou Dengpan, Machine-based Intelligent Face Recognition, (Beijing: Higher Education Press, 2010), 46.

[16]Ibid, 50-51.


[18]Kevin Bowyer, ‘Face Recognition Technology: Security versus Privacy’, IEEE Technology and Society Magazine (2004), 15. Available at:

[19]Chris Burt, ‘VSBLTY to provide facial biometrics for Smart City partnership in Latin America’. Available at:, accessed 30 September 2020

[20]Kevin Bowyer, ‘Face Recognition Technology: Security versus Privacy’, IEEE Technology and Society Magazine (2004), 15. Available at:

[21]Privacy International, ‘Biometrics’. Available at:, accessed 30 September 2020

[22]General Data Protection Regulation, chapter 2, article 9, 1. Available at:

[23]European Data Protection Board, ‘Guidelines 3/2019 on processing of personal data through video devices’, 8. Available at:

[24]Ibid, 8.

[25]Ibid, 7.

[26]Kelly Gates, Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance (New York: New York University Press, 2011), 27.


[28]Mou Dengpan, Machine-based Intelligent Face Recognition, (Beijing: Higher Education Press, 2010), 45.

[29]Ibid, 2.

[30]J. Pedraza et al, ‘Privacy-by-design rules in face recognition system’, Neurocomputing (Madrid: Elsevier, 2013), 51.


[32]Kristine Hamann and Rachel Smith, ‘Facial Recognition Technology: Where Will It Take Us?’, American Bar Association. Available at:, accessed 30 September 2020.

[33]Joy Buolamwini and Timnit Gebru, ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, (Conference on Fairness, Accountability, and Transparency, 2018), 3. Available at:

[34]DLA PIPER, ‘Data protection laws of the world’. Available at:
