The EU’s draft regulation on artificial intelligence (part 2)

Monday 13 December 2021

Katerina Yordanova
KU Leuven Centre for IT and IP Law, Leuven
katerina.yordanova@kuleuven.be

Introduction

The first part of this analysis[1] focused on presenting a general critical overview of the draft Artificial Intelligence Act and, in particular, the proposed regulation of banned artificial intelligence (AI) practices and of certain AI systems subject to additional transparency requirements.

This second part will shed light on the heavily regulated high-risk AI systems, the requirements and obligations attached to them, and the penalties in place for non-compliance.

What is a high-risk AI system?

Article 6 of the draft AI Act provides a rather confusing definition of which AI systems shall be considered high-risk. On the one hand, these are AI systems which are ‘intended to be used as a safety component of a product’ or are themselves a ‘product, covered by the EU harmonisation legislation listed in Annex II’.[2] In addition, these AI systems, or the products they are part of, are required to ‘undergo a third-party conformity assessment with the view to the placing on the market or putting into service’ of these products under the conditions of the EU legislative acts listed in the aforementioned Annex II.

Another type of high-risk AI system comprises those that fall under one of the categories listed in Annex III. Probably the most notable and most discussed such category is that of AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons. As mentioned in Part 1, a number of stakeholders, especially from civil society, have been advocating a total ban on the use of AI for biometric identification, which is currently considered a prohibited AI practice only in the narrow case of real-time remote biometric identification in publicly accessible spaces and for the purpose of law enforcement, subject to a few exceptions.

Recital 33 of the draft AI Act identifies technical inaccuracies in this type of high-risk AI system as particularly problematic because they could lead to biased results and discrimination. The draft regulation therefore prescribes that these systems be subject to specific requirements on logging capabilities, as well as human oversight. Such measures are complementary to the requirements the legal instrument already establishes for high-risk AI systems generally. More worrisome from a business perspective, however, are the cost estimates that the Commission itself provided with respect to the human oversight criteria. These are estimated at around €5,000 to €8,000 per year for AI users, and the figures do not include the potential loss of investment due to the additional regulatory burden and costs.

Other types of high-risk AI systems of particular importance to business are those ‘intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity’, as well as AI systems used in the context of employment, workers’ management and access to self-employment, which includes, for example, using AI systems for recruitment purposes or for making decisions regarding promotions or terminations. Both types could have a significant impact on human rights, ranging from the right to life and health, in the case of the management and operation of critical infrastructure, to the rights to equality and non-discrimination.

A third group of high-risk AI systems consists of those used for access to, and enjoyment of, essential private services and public services and benefits, such as AI systems used by public authorities to assess someone’s eligibility for benefits, or AI systems used for determining access or assigning natural persons to educational and vocational training institutions and for assessing students in such institutions.

Finally, Annex III designates as high-risk AI systems those used by law enforcement for various purposes, such as detecting a person’s emotional state for use as a lie detector. This particular use of AI systems is also envisaged in relation to migration, asylum and border control management. The final category of high-risk AI systems includes those intended to ‘assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts’. It is worth noting that AI systems intended for purely ‘ancillary administrative activities’, which do not affect the administration of justice at the level of an individual case, do not fall into this category.

What are the requirements for a high-risk AI system?

AI systems considered high-risk under article 6 of the draft AI Act need to comply with a set of requirements listed in Chapter 2 and designed to mitigate that risk.

First, there is a requirement to establish a risk management system that shall ‘consist of a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic updating’. Article 9 sets out four steps comprising the risk management system: (1) ex ante identification and analysis of the risks associated with a given AI system; (2) estimation and evaluation of the risks that may emerge when the AI system is used in accordance with its intended purpose or under conditions of ‘reasonably foreseeable misuse’; (3) analysis of risks based on data gathered from the post-market monitoring system under the Regulation; and (4) adoption of risk management measures. These measures are required to ‘give due consideration to the effects and possible interactions resulting from the combined application of the requirements set out in this Chapter 2’. This means that measures taken, for example, to satisfy the transparency requirement under article 13 also need to take account of, and be compliant with, the cybersecurity requirement in article 15. Article 9 further recommends testing of high-risk AI systems in order to ensure the effectiveness of the chosen risk management measures, which can be done through different means, one of them being testing within a regulatory testing environment.

Second, high-risk AI systems that ‘make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets’ that are required to meet certain criteria. Article 10(2) prescribes that training, validation and testing data sets shall be subject to appropriate data governance and management practices, including relevant design choices, examination in view of possible biases, etc. Article 10(3) sets out further requirements for the data sets themselves, which shall be ‘relevant, representative, free of errors and complete’. It is important to note that failure to comply with these requirements could result in fines of up to €30,000,000 or six per cent of the total worldwide annual turnover for the preceding financial year, whichever is greater. This ceiling is two percentage points higher than the fines envisioned in the General Data Protection Regulation (GDPR) and has given rise to serious discussion among industry actors, not least because of the absolute and misleading definition of data quality and the obvious lack of data quality metrics. In addition, article 10(5) of the draft AI Act introduces an exception to the otherwise prohibited processing of certain categories of personal data under article 9(1) of the GDPR, for the purposes of ensuring bias monitoring, detection and correction. The link between the two regulations needs to be clarified, given that article 10(5) of the draft AI Act effectively adds a new exception to the otherwise numerus clausus list of article 9(2) of the GDPR.
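As a purely illustrative aside, the short Python sketch below shows how the ‘whichever is greater’ rule plays out in practice and why it bites harder than the GDPR ceiling of €20,000,000 or four per cent of worldwide annual turnover; the turnover figure is a hypothetical assumption and is not taken from either instrument.

# Minimal sketch, assuming a hypothetical company with €1bn worldwide
# annual turnover, of how the 'whichever is greater' fine ceilings compare.
# Only the €30m/6% and €20m/4% figures come from the texts cited above;
# everything else is illustrative.

def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Return the maximum administrative fine: the higher of a fixed amount
    and a percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, pct_cap * turnover_eur)

turnover = 1_000_000_000  # hypothetical €1bn worldwide annual turnover

ai_act_cap = fine_ceiling(turnover, 30_000_000, 0.06)  # draft AI Act ceiling
gdpr_cap = fine_ceiling(turnover, 20_000_000, 0.04)    # GDPR ceiling

print(f"Draft AI Act ceiling: €{ai_act_cap:,.0f}")  # €60,000,000
print(f"GDPR ceiling: €{gdpr_cap:,.0f}")            # €40,000,000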

Articles 11 and 12 focus on traceability and transparency by establishing requirements for technical documentation and record-keeping for high-risk AI systems. The latter takes the form of so-called logs, which means that the system itself needs to be designed in a way that allows the automatic recording of events while it is in operation.
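By way of illustration only, a record-keeping design along these lines might wrap each use of the system so that an event is logged automatically; the wrapper, field names and file format in the Python sketch below are assumptions made for the example and are not prescribed by the draft Act.

import json
import time
import uuid

def predict_and_log(model, inputs, log_file="ai_events.jsonl"):
    """Run the model and automatically append a timestamped event record,
    so each use of the system leaves a trace without manual intervention."""
    output = model(inputs)
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),              # when the system was used
        "input_summary": repr(inputs)[:200],   # truncated reference to input data
        "output_summary": repr(output)[:200],  # truncated record of the result
    }
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
    return output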

Another requirement for a high-risk AI system is transparency, established in article 13 in the sense of ‘transparency and provision of information to users’. The same article prescribes that high-risk AI systems shall be designed and developed in such a way as ‘to enable users to interpret the system’s output and use it appropriately’. The wording here is very confusing. If we go back to the work of the High-Level Expert Group on AI,[3] which is claimed to form the basis of Chapter 2 according to the Explanatory Memorandum accompanying the draft AI Act, transparency, as one of the key requirements for Trustworthy AI, encompasses three elements: traceability, explainability and communication. As already discussed, traceability was established as a requirement in articles 11 and 12, albeit taken out of the context of transparency, which is separately regulated in article 13. The communication element was likewise separated out and appears in article 52, which was examined in detail in Part 1 of this analysis. This means that, according to the logic of the draft AI Act, transparency equals explainability. While there is no unified formal definition of explainability in computer science,[4] the one proposed by the HLEG[5] concerns the ability to ‘explain both the technical processes of an AI system and the related human decisions’. This definition differs from the goal of article 13, and it is also unclear whether it covers only explainability or also interpretability, given the ongoing dispute regarding the difference between the two terms.[6] It is also worth noting that enhancing explainability, and therefore transparency, might reduce accuracy and compromise cybersecurity.[7] This ambiguity is particularly concerning because accuracy and cybersecurity are themselves defined as requirements (together with robustness) in article 15.

The requirement that high-risk AI systems need to be ‘designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle’ was met with scepticism on several grounds. First, what counts as ‘appropriate’ may correspond to very different states of the art over the lifecycle of a given AI system. Second, it is unclear what happens to this requirement if, for example, the provider ceases to exist and therefore stops releasing appropriate updates. Third, AI systems are complex systems, and a provider does not necessarily have control over, or awareness of, vulnerabilities incorporated in an element of the system, for instance in the specific libraries used.

The last requirement is human oversight, which is, in essence, the requirement that AI systems are ‘designed and developed in such a way […] that they can be effectively overseen by natural persons during the period in which the AI system is in use’. This is to be achieved through measures that enable the individuals to whom oversight is assigned to do an extensive list of things, including to ‘fully understand the capacities and limitations of the high-risk AI system’ and to ‘correctly interpret the high-risk AI system’s output’. Besides the cost of this particular requirement, which was discussed earlier, the list in article 14(4) presupposes a combination of skills and professional qualifications that is rare to find and largely absent from the current pool of talent. In practice, this will probably result in non-compliance or merely pro forma compliance, which is very far from the spirit of the law.

Conclusion

The overall goal behind adopting this risk-based approach towards AI systems, namely balancing innovation against the protection of human rights and fundamental freedoms, would not be sufficiently achieved by the text of the draft Regulation as it currently stands. There is a dire need for consistency, clarity and properly drafted legal rules in order to turn the draft AI Act into a successful endeavour.


[1] Katerina Yordanova, The EU’s draft regulation on artificial intelligence (part 1), IBA Technology Law Committee, 25 June 2021, see https://www.ibanet.org/June-2021-EU-drft-regulation-ai accessed 2 December 2021.

[2] Annex II contains a list of EU instruments such as Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery (currently in the process of being replaced by a regulation), Directive 2009/48/EC of the European Parliament and of the Council of 18 June 2009 on the safety of toys, Regulation (EU) 2016/425 of the European Parliament and of the Council of 9 March 2016 on personal protective equipment, etc.

[3] HLEG, ‘Ethics Guidelines for Trustworthy AI’, 2019; see n 5 below.

[4] Sheikh Rabiul Islam, William Eberle and Sheikh K Ghafoor, ‘Towards Quantification of Explainability in Explainable Artificial Intelligence Methods’, 2020, AAAI Publications, The 33rd International FLAIRS Conference.

[5] The High-Level Expert Group on Artificial Intelligence (HLEG) is a group of 52 experts bringing together representatives from academia, civil society and industry, appointed by the European Commission to support the implementation of its European Strategy on Artificial Intelligence.

[6] Ronan Hamon, Henrik Junklewitz and Ignacio Sanchez, ‘Robustness and Explainability of Artificial Intelligence’, 2020, JRC Technical Report.

[7] Bruce Schneier, ‘The Coming AI Hackers’, 2021, The Cyber Project, Council for the Responsible Use of AI, Belfer Center for Science and International Affairs.