Navigating the maze: implications of the EU’s AI Regulatory Framework for the electronic communications sector

Monday 24 July 2023

Magda Cocco
Vieira de Almeida, Lisbon; Newsletter Officer, Communications Law Committee
mpc@vda.pt

Iakovina Kindylidi
Vieira de Almeida, Lisbon
imk@vda.pt

Introduction

Artificial intelligence (AI) continues to shape the future of digital interactions and the connectivity market. This rapid proliferation of AI technologies underscores the need for a comprehensive regulatory framework. The proposed Artificial Intelligence Act (AIA), coupled with the Proposal for Product Liability Directive[1] ('PLD II') and the Proposal for AI Liability Directive,[2] marks a pivotal shift in the legal landscape, both in the EU and globally.

As a major user of AI applications, the electronic communications sector is squarely in the crosshairs of these proposed changes. To better understand the transformative impact of emerging technologies such as AI on the sector, earlier this year the European Commission launched a public consultation[3] on the future of electronic communications. Although the results of the consultation, which closed in May 2023, had not yet been published as of June 2023, the responses of the Body of European Regulators for Electronic Communications (BEREC) are noteworthy. Moreover, in December 2022, BEREC published its Draft Report on challenges and benefits of AI solutions in the telecommunications sector (BoR (22) 191)[4] ('the Draft Report'), which specifically addresses the impact of AI on the sector.

Against this background, this article highlights the potential impact of the upcoming AI Regulatory Framework on the electronic communications sector and aims to support the discussion on measures that the sector can implement to navigate the regulatory maze and ensure the future-proof compliance of its AI systems.

The upcoming EU AI Regulatory Framework: an overview

On 21 April 2021, the European Commission presented its much-anticipated Proposal for a Regulation on a European Approach for Artificial Intelligence.[5] The Proposal advocates a uniform set of rules to regulate AI and marks the first comprehensive regulatory initiative focused on AI globally, aimed at promoting the development and adoption of safe AI within the EU. Concurrently, it seeks to protect the fundamental rights of EU citizens and to establish the EU as a global trailblazer in AI regulation, mirroring the EU's role in setting the gold standard for data protection with the General Data Protection Regulation (GDPR).

While primarily a European affair, the AIA reflects the ‘Brussels effect’, where the EU extends its regulatory influence beyond its borders by setting standards and requirements impacting entities operating outside the EU. The AIA has a broad territorial scope, extending to AI providers and users outside the EU when their AI system is used in the European market, or when the outcomes produced by the system are used in the EU. This externalisation of the EU approach to AI regulation could significantly impact the development and deployment of AI systems worldwide.

Moreover, although its scope is horizontal, the AIA follows a risk-based approach. It sets out different obligations for various AI providers and users, depending on the risks that their AI systems pose to the health, safety and fundamental rights of individuals. More specifically, it categorises AI systems based on their risk level, as follows: unacceptable, high, limited/specific, and minimal.

  1. Unacceptable risk: AI systems that are incompatible with EU standards and are therefore completely prohibited.
  2. High risk: AI systems that may pose risks to individuals but can be designed or deployed in the Single Market, provided they comply with the obligations outlined in the AIA.
  3. Limited/specific risk: certain AI systems that, because they interact closely with individuals and may influence how individuals perceive them, are subject to additional transparency obligations. High-risk AI systems may also be subject to this specific transparency obligation. The specific-risk category includes foundation models: following the legal and ethical discussion around ChatGPT, the current wording[6] of the AIA imposes specific obligations on such large machine learning models.
  4. Minimal risk: although outside the AIA’s scope, these systems still need to comply with the overall regulatory framework. They can also adopt some of the AIA obligations as best practices.

Most obligations are placed on providers of high-risk AI systems. These include adequate risk assessment, conformity assessments, record-keeping and transparency duties. Severe penalties are also stipulated: up to seven per cent of global annual turnover or €40m, whichever is higher.
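
To illustrate how the 'whichever is higher' cap operates: a hypothetical provider with a global annual turnover of €1bn would face a maximum fine of €70m, since seven per cent of its turnover exceeds the €40m floor. The following is a minimal sketch in Python, assuming the figures in the Parliament's June 2023 text (which may change during the trialogue):

    # Minimal sketch: the 'whichever is higher' maximum-fine rule cited above.
    # The figures (7 per cent / EUR 40m) follow the Parliament's June 2023 text
    # and are assumptions pending the outcome of the trialogue negotiations.
    def aia_maximum_fine(global_annual_turnover_eur: float) -> float:
        """Return the higher of 7% of global annual turnover or EUR 40m."""
        return max(0.07 * global_annual_turnover_eur, 40_000_000.0)

    # Example: EUR 1bn turnover -> EUR 70m cap, as 7% exceeds the EUR 40m floor.
    print(f'EUR {aia_maximum_fine(1_000_000_000):,.0f}')  # prints: EUR 70,000,000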

The AIA is further complemented by the proposed PLD II and AI Liability Directive, both published in September 2022. Both proposals follow the AIA's logic: the PLD II aims to modernise the existing rules on manufacturers' strict liability for defective products, while the AI Liability Directive establishes uniform liability rules for damage caused by AI systems, providing broader protection for victims and easing access to information and the burden of proof.

Combined, these proposals aim to promote trust in AI systems, protect fundamental rights and values, and foster innovation and competitiveness in the EU market. They employ a risk-based methodology to ensure AI systems are developed and used responsibly, transparently, impartially, fairly, safely and ethically, while also respecting end users’ privacy.

AIA and AI systems in the electronic communications sector

The electronic communications sector is leveraging AI systems for network optimisation, predictive maintenance, virtual assistants and fraud detection, among other uses.[7] These systems are being developed or acquired and implemented throughout the value chain to enhance the resilience and efficiency of infrastructure, products and services.

On 14 June 2023, the European Parliament reached an agreement on a series of amendments to the original draft of the AIA, published in April 2021. One of these amendments updates and expands the scope of Annex III, the numerus clausus list of high-risk AI systems.

In its current wording, the AIA expressly considers high-risk AI systems to be those intended for use as safety components in managing and operating the supply of water, gas, heating, electricity and critical digital infrastructure. Although the AIA does not provide a definition for ‘critical digital infrastructure’, it seems to align its wording with that of the NIS2 Directive. Therefore, critical digital infrastructure entities will be those mentioned in Annex I of the NIS2 Directive:

  1. internet exchange point providers;
  2. domain name system service providers, excluding operators of root name servers;
  3. top-level domain name registries;
  4. cloud computing service providers;
  5. data centre service providers;
  6. content delivery network providers;
  7. trust service providers;
  8. providers of public electronic communications networks; and
  9. providers of publicly available electronic communications services.

Electronic communications service providers assume different roles and provide different types of services and products in the digital ecosystem. Irrespective of any specific AI systems they use (eg, virtual assistants, biometrics), which will be subject to specific obligations under the AIA, at least the AI systems used in services falling within the scope of critical digital infrastructure, as defined above, will be deemed high-risk. In this regard, it is worth noting that the BEREC Draft Report does not address which AI systems used by electronic communications entities will be classified as high-risk.

This should also be taken into consideration in cases where the critical digital infrastructure provider, as defined above, is an AI user rather than the AI provider. Even in these cases, and without prejudice to situations where the AI user may itself be considered a provider under the AIA, the user should assess the conformity of the AI system it acquires during the AI procurement process.

Furthermore, PLD II broadens the concept of 'product' to include services and digital products. It introduces a 'risk-based liability' concept, whereby producers of high-risk AI systems could be held liable for damage even without fault. The same approach is followed in the Proposal for AI Liability Directive in the case of AI users. This may also impact entities in the electronic communications sector, particularly in relation to deployed AI systems. It should be noted that, in their current wording, the proposals do not introduce an exception for the use of AI in the context of electronic communications, beyond a general disclaimer stating that the proposals do not affect existing liability rules, including those of the Digital Services Act,[8] where applicable. There is therefore a need for further alignment with the applicable EU Consumer Protection Framework.

Additional obligations: enhancing data protection, safety and cybersecurity

The proposed AIA also emphasises the need for data governance, cybersecurity, safety and robustness of AI systems. These measures echo the principles of the GDPR and the ePrivacy Directive, as well as those of the EU cybersecurity framework, especially the NIS2 Directive, and of the consumer protection and product safety framework, particularly the Radio Equipment Directive and the Low Voltage Directive, as they relate to the electronic communications sector.

More specifically, the AIA contains provisions on the quality and safety of input data and on the detailed reporting of the datasets used in the early stages of the AI lifecycle, including their sources and purposes, to meet the transparency obligation it sets out. Businesses in the electronic communications sector dealing with vast amounts of user data will therefore need to reassess their data governance policies to maintain business integrity and avoid legal repercussions.
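
By way of illustration only, the following minimal Python sketch shows the kind of dataset provenance record an operator might keep to support such reporting; the field names are illustrative assumptions, not terms taken from the AIA text:

    # Minimal sketch: a dataset provenance record of the kind the AIA's
    # data-governance and transparency provisions contemplate. All field
    # names are illustrative assumptions, not terms from the AIA.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class DatasetRecord:
        name: str
        source: str                   # where the data was obtained
        purpose: str                  # why the dataset is used
        lifecycle_stage: str          # eg 'training', 'validation', 'monitoring'
        contains_personal_data: bool  # flags GDPR/ePrivacy relevance
        collected_on: date
        quality_checks: list[str] = field(default_factory=list)

    # Hypothetical example for a predictive-maintenance use case.
    record = DatasetRecord(
        name='network-telemetry-2023',
        source='internal network operations logs',
        purpose='training a predictive-maintenance model',
        lifecycle_stage='training',
        contains_personal_data=False,
        collected_on=date(2023, 5, 1),
        quality_checks=['deduplication', 'schema validation'],
    )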

From a safety and cybersecurity standpoint, AI systems should be technically robust and safe to minimise unintended and unexpected harm and to prevent unlawful use by malicious third parties, in line with the consumer protection and product safety framework. Thus, these ex ante obligations, along with the reinforced ex post civil liability framework, will require entities in the electronic communications sector to update their internal policies and their contracts with third parties.

However, given that the electronic communications sector is already heavily regulated, there is an enhanced need to align the various obligations arising from the different horizontal and sector-specific regulations. This alignment would help ensure compliance by players in this sector, avoiding the duplication of effort and resources and so-called gold-plating practices that may hinder innovation and AI adoption in the sector.

Conclusion and next steps

The proposed AIA and PLD II introduce a new wave of compliance requirements and liability provisions that will profoundly impact the electronic communications sector. Additionally, aligning the obligations and principles set out in the new AI framework with existing frameworks, especially concerning data protection, privacy, consumer protection, product safety and cybersecurity, is crucial.

Businesses and legal professionals in the sector must remain vigilant, embracing these changes as stepping stones toward a transparent, responsible and robust AI future.

Following the European Parliament's approval of its amendments to the AIA, trialogue discussions between the European Commission, the European Parliament and the Council of the European Union have commenced. The Commission aims to conclude the negotiations as quickly as possible and to publish the AIA by the end of 2023 or the beginning of 2024. Once the AIA is in force, AI stakeholders will have two years (with some exceptions) to comply with its obligations. The upcoming regulatory framework will undoubtedly require an adjustment period; however, it also represents an opportunity to foster trust and reliability in the rapidly growing AI ecosystem.

 

[7] See Chapter 5 of the Draft Report on challenges and benefits of Artificial Intelligence (AI) solutions in the telecommunications sector (BoR (22) 191).