The European Commission’s Ambitious Regulatory Agenda in the Field of Digital Services and AI: a status update from Brussels on the DSA, DMA and AI Act

Monday 2 August 2021

Hein Hobbelen
Bird & Bird, Brussels
hein.hobbelen@twobirds.com

Francine Cunningham
Bird & Bird, Brussels
francine.cunningham@twobirds.com

Ciara Denihan
Bird & Bird, Brussels
ciara.denihan@twobirds.com

Introduction

Declaring its ambition to apply ‘European values’ to the digital ecosystem, the European Commission is pursuing a highly ambitious regulatory agenda in the field of digital services and artificial intelligence (AI). Current proposals under discussion at EU level are designed to set global standards for the regulation of online platforms, digital services and AI systems, as well as to ensure that EU laws adopted in this area have extraterritorial reach. The EU appears to be emboldened by what it sees as the success of the General Data Protection Regulation (GDPR) in establishing a global standard for the protection of personal data, and now aims to set the global benchmark when it comes to regulating cutting-edge technologies. Nevertheless, aspects of these proposals have been criticised as misguided by some industry leaders in the technology sector, such as former Google CEO Eric Schmidt, who commented that the transparency provisions in the AI proposal did not reflect how machine-learning systems work in reality.

This article will examine three of the legislative proposals launched by the Commission, which are now subject to negotiation and amendment: the proposed Digital Services Act (DSA) and Digital Markets Act (DMA), as well as the proposal for an Artificial Intelligence Act (AI Act).

Extraterritorial effect

In terms of the scope of application of these proposals, it is important to note that all three proposals would have extraterritorial reach. The proposals recognise the global nature of the digital ecosystem by focusing not on the physical location of the company, but rather where its users are located. This highlights the Commission’s desire to expand its impact beyond companies within the EU and ensure that all EU consumers and businesses are protected from harmful technologies, products, content and practices. It also addresses concerns expressed by European companies during the consultation phase for these proposals, that they would be placed at a competitive disadvantage if only enterprises established within the EU were subject to the new rules.

The DSA will apply to all intermediary service providers, irrespective of their place of establishment, if they provide services within the EU. Whether they do so is determined by a ‘substantial connection’ test, which is deemed satisfied where the provider has an establishment in the EU, has a significant number of users in one or more Member States, or targets its activities towards one or more Member States.[1]

The DMA will apply to providers of core platform services who are designated as ‘gatekeepers’. Core platform services are defined as including online intermediation services, online search engines, online social networking services, video-sharing platform services, number-independent interpersonal communication services, operating systems, cloud computing services and advertising services. Such services will all fall within the scope of the new rules, irrespective of the provider’s place of establishment or residence, if they are offered to business or end users located within the EU.[2]

Meanwhile, the AI proposal will apply to public and private actors alike where their AI systems are placed on the EU market or affect people located in the EU. While both providers and users of high-risk AI systems will be subject to the requirements envisaged by the proposal, private, non-professional users of such systems will not.[3]

Companies that fall within the scope of the various proposals will be subject to far-reaching, and often entirely new, obligations.

State of play

All three files are subject to the ordinary legislative procedure, which requires agreement between the European Parliament and the Council for legislation to be adopted. Negotiations on all three files are underway. In each case, the lead committee in the European Parliament is the Internal Market and Consumer Protection Committee, with input from a range of associated committees and committees drafting opinions. A rapporteur has been assigned to each file, who is responsible for preparing a report on the draft legislation and eventually finding compromises from potentially thousands of suggested amendments. To date, a draft report has been published by the Danish Social Democrat MEP Christel Schaldemose on the DSA and by the German Christian Democrat MEP Andreas Schwab on the DMA. Both reports are currently scheduled for vote in committee in November, with a European Parliament plenary vote likely to take place in December 2021.

As the AI proposals were published by the Commission more recently, the Italian Democratic Party MEP Brando Benifei has yet to publish his draft report, but has already indicated that he may wish to go further than the Commission’s draft with regard to AI applications that could have an impact on fundamental rights. Below, we will briefly consider some of the most debated obligations likely to be imposed on companies which fall within the scope of one or more of these legislative files.

Increased obligations

Digital Services Act – key elements

Internet service providers, cloud services, messaging services, marketplaces, and social networks, among others, will fall under the scope of the DSA proposal, which aims to place more responsibility on online platforms connecting consumers with goods, services and content. The DSA introduces several new obligations for online intermediaries, particularly regarding illegal content online.

Central provisions would require platforms to establish an enhanced ‘notice and takedown’ procedure, which would allow users to flag illegal content and dangerous products they encounter online, with providers who voluntarily take down illegal content suffering no negative consequences (ie, no loss of their ‘safe harbour’). MEP Schaldemose’s draft report suggests establishing strict timelines for acting on such notices, with time limits ranging from 24 hours to seven days, depending on whether the content is likely to affect public security, public policy or public health. The draft report also calls for platforms to suspend users who frequently publish ‘manifestly illegal’ content.

With regard to the traceability of business users, the DSA states that online marketplaces should adopt a ‘know-your-business-customer’ (KYBC) approach to help identify vendors of illegal goods. Certain online platforms will be obliged to receive, store, partially verify and publish information on traders using their services. MEP Schaldemose’s draft report proposes strengthening these obligations by requiring the intermediary service provider to carry out additional due diligence checks.

The DSA aims to place further obligations on ‘very large online platforms’ (VLOPs), which are defined as online platforms reaching more than ten per cent of the EU population, equivalent to 45 million users. Such platforms will need to put additional checks and balances in place to ensure that their systems are not manipulated and misused to spread harmful content or goods. Schaldemose’s draft report suggests that VLOPs should be required to explain the functioning of their algorithms in order to ensure that oversight bodies have the relevant information to assess compliance. There are also extensive requirements for very large platforms to increase transparency around online advertising and recommendation systems, with some MEPs controversially planning to push further to ban personalised online advertising completely.

Digital Markets Act – key obligations for ‘gatekeepers’

The DMA contains a set of presumptive criteria for qualifying a large online platform as a ‘gatekeeper’, including number of users, turnover thresholds and an entrenched and durable position over three years. However, MEP Schwab’s draft report suggests increasing such thresholds, with reference to a market capitalisation of €100bn rather than the €65bn in the Commission’s original proposal. Gatekeepers would also need a turnover of at least €10bn in the last three financial years, rather than the €6.5bn indicated in the original proposal. Schwab’s draft report also refers to the description of a core platform service as one which has more than 45 million monthly active end users located in the EU and more than 10,000 yearly active business users in the EU in the last financial year. Such amendments to the definition of a gatekeeper could lead to only a handful of US technology companies, and potentially one Chinese company, being covered by the proposals.

Under the Commission’s proposal, platforms that are designated as ‘gatekeepers’ will be subject to 18 ex ante obligations. Article 5 sets out a self-executing list of obligations that do not require further refinement by the Commission, whereas Article 6 contains a further set of obligations that may be subject to more specification or Commission guidance. These include a restriction on self-preferencing and requirements to allow for interoperability, data portability and access to performance measuring tools for advertisers.

With regard to self-preferencing or tying practices, the draft report goes further than the Commission’s proposal, suggesting that in order to avoid any conflicts of interests, gatekeepers should be required to treat their own products or services ‘as a separate commercial entity’.

The DMA has faced criticism for the absence of any effective provisions that would combat so-called ‘killer acquisitions’, whereby a large established company acquires a smaller, innovative start-up to pre-empt competition between the start-up and the incumbent. While Article 12(1) deals with mergers, requiring gatekeepers to notify the Commission of any intended concentration, regardless of whether or not it would be ordinarily notifiable, many have noted that the provision ‘lacks teeth’ as the Commission will have no power to veto such a transaction. MEP Schwab’s draft report suggests an amendment to Article 12 to require gatekeepers to notify not only the Commission, but also other competent national authorities, of all their intended or concluded acquisitions of other providers of core platform services. However, it is worth noting the increased administrative burden this would place not only on platforms, but also on competition authorities, including the Commission’s Directorate-General for Competition.

Artificial Intelligence Act – main provisions

In terms of its AI proposal, the Commission has adopted a risk-based approach, built around the concept that ‘the greater the risk, the stricter the rule’. The proposal therefore divides AI systems into four categories:

  • unacceptable risk (including exploitative or manipulative practices and AI-based social scoring carried out by public authorities): such systems are banned;
  • high risk: such systems will only be allowed if they comply with certain mandatory requirements (including data governance, documentation and record keeping, transparency and human oversight);
  • low risk (including chatbots): such systems will be subject to specific transparency requirements; and
  • minimal risk: such systems are not subject to any specific obligation as they are not considered to pose a threat to citizens’ fundamental rights.

Stakeholders have pointed to a lack of clarity and legal certainty around the definition of high-risk AI systems, as well as around the term ‘subliminal techniques’ used in relation to prohibited AI practices.

When it comes to the interaction of AI regulation and privacy concerns, a prime battlefield is likely to be the use of AI systems for facial recognition purposes. The European Data Protection Supervisor has already called for a moratorium on this controversial technology and some MEPs are ready to advocate a complete ban.[4]

Due to allegedly vague wording in Recital 9, an additional open question is whether the new rules will apply to AI systems embedded in online platforms. If so, the interaction of these proposals with the DMA and DSA will be interesting to examine.

The next steps

It is important to note that the DSA, DMA and AI Act will all take the form of regulations, as opposed to directives, so they will be directly applicable in EU Member States. With the European Parliament and Council discussing and amending these proposals in parallel, it is likely to be at least two years, and quite possibly longer, before a final consensus is reached and these new rules take effect. An additional proposal for an EU Data Act is due later in 2021, which could introduce further obligations for private companies to open up access to their data to new entrants to the market. Companies operating in the digital sphere therefore face a future of significant regulatory challenges and the risk of hefty fines if they fall short. Overall, there is industry concern that these proposals, while well-intentioned, could have the effect of shifting resources away from investment and into compliance.


Notes

[1] Article 1(3) of the proposed Digital Services Act.

[2] Article 1(2) of the proposed Digital Markets Act.

[3] Article 2(1) of the proposed Artificial Intelligence Act.

[4] European Data Protection Supervisor Report.