The EU’s draft regulation on artificial intelligence (part 1)

Friday 25 June 2021

Katerina Yordanova,

KU Leuven Centre for IT and IP Law, Leuven

katerina.yordanova@kuleuven.be

Introduction

The European Union’s ambition to regulate artificial intelligence (AI) is hardly surprising. Perhaps the first significant action in that direction was the establishment of the High-Level Expert Group on AI (HLEG) in 2018, which paved the way for the President of the European Commission, Ursula von der Leyen, to declare the planned adoption of an AI legal instrument a top priority in her policy agenda. In February 2020, the Commission published the White Paper on Artificial Intelligence, presenting different policy options which, after public consultation, resulted in the first draft of the Regulation Laying Down Harmonised Rules on Artificial Intelligence (the ‘AI Act’).

The draft AI Act

The legal basis of the newly proposed regulation is Article 114 of the Treaty on the Functioning of the European Union.1 As such, the AI Act pursues four specific objectives:

  1. to ensure that AI systems available on the EU market are safe and respect fundamental rights and Union values;2
  2. to safeguard legal certainty;
  3. to enhance governance and effective enforcement of the existing legislation regarding AI systems; and
  4. to facilitate the development of a single market for lawful, safe and trustworthy AI, which shall help avoid market fragmentation.

In pursuit of these objectives, the bulky regulation lays down rules on ‘placing on the market, putting into service and the use of AI systems in the Union’. It attempts to define and classify AI systems via a risk-based approach and subsequently regulates them along a spectrum, going as far as prohibiting certain AI practices. The personal scope of the Act is quite broad, including:

  • ‘providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country’;
  • users of AI systems within the Union; and
  • ‘providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union’.

This extremely wide scope and broad extraterritorial effect somewhat resembles the approach adopted by the General Data Protection Regulation, which proved extremely problematic for companies established in third countries, as the recent ruling in Schrems II showed.3 To make matters even more complicated, a ‘provider’ may be a ‘natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge’. This definition is, in practice, problematic because its scope is so large that it encompasses big tech companies such as Microsoft and, at the same time, individual Free and Open-Source Software (FOSS) developers. It is not clear whether, in such a context, uploading the software to GitHub would constitute ‘placing it on the market’ or ‘putting it into service’.

The material scope of the AI Act is limited, for example, by carve-outs for certain regimes that exist in other EU acts, such as Regulation (EC) 300/2008 on common rules in the field of civil aviation security, and for AI systems developed or used exclusively for military purposes. This, however, covers a rather small number of cases, considering the broad scope of the definition of an AI system provided by the Act. Article 3(1) identifies an AI system as ‘software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with’. Annex I contains a rather confusing list of techniques that is supposed to make the regulation future-proof.

While the definition provided by the Act is certainly an improvement compared to the one given by the HLEG in the Ethics Guidelines for Trustworthy AI, the regulation still defines some technical concepts in questionable ways. A striking example is the attempted definition of an emotion recognition system, which is deemed to be an ‘AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data’. One does not need to be tech-savvy to anticipate the legal problem inherent in the word ‘intention’.

Aside from very pragmatic questions - such as ‘When does a thought become an intention?’ and ‘How would a system determine this?’ - the use of ‘intention’ in legal acts usually denotes a form of mens rea. This is, however, considerably different from the context in which it is used here. Since any EU regulation is directly applicable in the legal systems of Member States, this would raise significant problems. By way of example, Bulgarian law provides no legal definition of, or test for, this variety of intention. Unfortunately, similar inconsistency in the language is peppered across the AI Act which, together with the lengthy and unnecessarily complicated sentences, turns the draft into a very poor example of legislative technique. If it remains unaltered, this would be a significant departure from the rule of law’s fundamental principle that legal provisions should be clear and predictable, especially since the problem is not limited to this particular Act.

Going back to the substantive legal rules established in the regulation, we can roughly divide it into four parts based on the different levels of risk posed by AI systems and the chosen means of regulation. This is without prejudice to the provisions related to the establishment of new bodies at national and European level, which play different roles and have different competences across all four types of AI regulation.

The first set of rules, laid down in Article 5, concerns prohibited AI practices. The Commission believed that in some specific cases the risk to human safety and fundamental rights is so great that no mitigation measures would be sufficient. Thus, the placing on the market and putting into service of an AI system that, for example, ‘deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm’ is not allowed. This is rather confusing because the phrase ‘materially distort a person’s behaviour’ is not defined. In fact, it seems more like a spin-off of the ‘material distortion of the economic behaviour of consumers’ criterion, which is well known to consumer protection lawyers familiar with the Unfair Commercial Practices Directive. However, judging by the meaning implied in the AI Act, its use here appears to be broader, though it is not clear precisely how much broader. It is concerning to prohibit AI practices EU-wide based on criteria that are anything but clear.

Another interesting example of prohibited AI practices concerns the much-debated issue of biometric identification. The topic has been discussed for quite a while, and there are serious lobbying efforts advocating a full ban on AI-based biometric identification. It is not surprising that those lobbying for such a ban are not happy with the currently proposed one, limited to ‘the use of “real-time” remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement.’ Firstly, there are a number of exceptions related to necessity; for example, for the prevention of a ‘specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack’. While these appear to be valid objectives in principle, the lack of a recognised uniform definition of what constitutes a ‘terrorist attack’ in both international and European law, coupled with the often stringent mens rea requirements, makes it hard to envision how law enforcement authorities would benefit from this exception in a uniform and compliant way.

Secondly, the definition of a publicly accessible space as ‘any physical place accessible to the public, regardless of whether certain conditions for access may apply’ is very broad. When read in conjunction with Recital 9 of the AI Act, it becomes even less clear which spaces are publicly accessible.

Thirdly, unlike the other two prohibited practices, what is forbidden is ‘the use’ as opposed to ‘placing on the market, putting into service or use’. One can infer from this that such ‘real-time’ remote biometric identification systems could be manufactured and installed as a matter of principle, so long as they are not ‘used’ outside the scope of the exception. Naturally, this is very far from the much-touted total ban advocated for by human rights organisations, such as Amnesty International.4

Skipping the heavily regulated high-risk AI systems, which are going to be explored in the second part of this piece (to be published in the next volume of the Technology Law Committee eBulletin), I would like to focus on the last two types of regulation prescribed by the AI Act. The first encompasses AI systems that interact in a unique way with humans and, therefore, require a high level of transparency, even if they are not considered high-risk proper under the regulation. This includes AI systems that interact with people, emotion recognition systems and systems that generate deep fakes. The transparency obligation aims to ensure that people are aware that they are interacting with a machine, that the system processes their emotions and/or that certain content has been artificially generated. This is without prejudice to any additional requirements stemming from such AI being classified as high-risk.

For all remaining AI systems that are not classified as prohibited or high-risk, and do not require a high degree of transparency, the Commission proposes a voluntary approach through self-regulatory means, such as codes of conduct. The aim here is apparently to achieve the highest possible level of protection of fundamental rights by presenting this voluntary approach as a competitive advantage that would supposedly boost innovation. Similarly, the regulation attempts to establish regulatory sandboxes for AI in Articles 53 and 54. Yet it remains unclear how different national authorities will supervise these sandboxes, what would happen to AI products or services involved in other sandboxes (eg, Fintech regulatory sandboxes) and, most importantly, what the main incentive for companies to participate would be, since regulatory leeway does not seem to be among the incentives on offer.

Other notable provisions in the AI Act involve the establishment of a European AI Board and national competent authorities, as well as post-market monitoring, information-sharing and market surveillance mechanisms.

The second part of this analysis will review the scope of the regulation regarding high-risk AI systems and offer a critical overview of the numerous requirements and obligations of both providers and users. It will also look at the penalty system for non-compliance within the regulation.

 

Notes

[1] Article 114 deals with the establishment and functioning of the internal market.

[2] The values are described in Article 2 of the Treaty on European Union.

[3] Case C-311/18, Data Protection Commissioner v Facebook Ireland Ltd and Maximilian Schrems.

[4] www.amnesty.org/en/latest/news/2021/04/eu-legislation-to-ban-dangerous-ai-may-not-stop-law-enforcement-abuse.