Complying with the EU’s digital agenda

Neil Hodge
Wednesday 23 November 2022

A raft of new EU legislation will have a significant impact on the tech sector and companies’ use of artificial intelligence. In-House Perspective assesses how in-house teams will be affected and how they can prepare.

For over a decade the European Union has sought to create robust rules to stimulate growth in artificial intelligence (AI) as part of its digital agenda. Beginning in autumn 2022 and continuing into 2024, a raft of new legislation will seek not only to help EU businesses explore opportunities presented by the technology, but also to rein in Big Tech companies whose dominant position and use of data have clashed with the EU’s data privacy and competition laws. 

Collectively, the EU’s Artificial Intelligence Act (‘AI Act’), the EU Digital Markets Act (‘DMA’) and the EU Digital Services Act (‘DSA’) are intended to harmonise the bloc’s digital market and protect consumers from any unintended harm that AI technologies and machine learning may create. As a consequence, the legislation primarily targets Big Tech companies, which, due to their scale and reach, have the greatest potential to cause harm. It also aims to level the playing field so new companies can enter the market without being swallowed up by larger, established rivals. More widely, the rules will also allow businesses to flag up unfair practices, such as the abuse of targeted advertising to drive sales from customer data without users’ knowledge or consent.

Going for the gatekeepers

The European Commission – the EU’s executive body – published its draft proposal for the AI Act in April 2021. The legislation seeks to remedy existing fragmentation in AI regulation across the EU, as well as address concerns around potential risks posed by the unregulated use of AI-based technologies. The legislation is ‘industry neutral’ and has extraterritorial application.

The AI Act follows a risk-based approach and regulates AI systems according to the level of risk or harm they present. There are four bands: ‘minimal’ risk, where the risk of harm is so low that such systems are not regulated under the Act; ‘limited’ risk, where systems are subject to certain transparency requirements but should not pose any serious harm; ‘high’ risk, which could include systems that assist with recruiting or managing employees, use biometric identification or evaluate consumer creditworthiness; and ‘unacceptable’ risk, where systems are deemed so potentially harmful that their use is banned outright.
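
Purely as an illustration, the four bands can be read as a simple classification scheme. In the sketch below, the tier names follow the Act as described above, while the example systems and the helper function are hypothetical simplifications rather than the legislation's actual test.

```python
# Illustrative only: the AI Act's four risk bands as described above.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # effectively unregulated under the Act
    LIMITED = "limited"            # transparency obligations only
    HIGH = "high"                  # strict conformity and control requirements
    UNACCEPTABLE = "unacceptable"  # prohibited outright

# Hypothetical mapping, based on the examples given in the text.
EXAMPLE_SYSTEMS = {
    "spam filter": RiskTier.MINIMAL,
    "customer-service chatbot": RiskTier.LIMITED,
    "CV-screening recruitment tool": RiskTier.HIGH,
    "consumer creditworthiness scoring": RiskTier.HIGH,
    "biometric identification system": RiskTier.HIGH,
}

def carries_heaviest_obligations(tier: RiskTier) -> bool:
    """High-risk systems bear the Act's most onerous compliance burden."""
    return tier is RiskTier.HIGH
```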

Sanctions for non-compliance are tough. Companies breaching the rules face fines of up to six per cent of their global turnover or €30m – whichever is the higher figure – depending on the severity of the non-compliance.
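
The arithmetic of that ceiling is simply the larger of the two figures. A minimal sketch, with the function name and example turnover invented for illustration:

```python
# The AI Act's maximum penalty as described above: up to six per cent of
# global turnover or EUR 30m, whichever is higher. Illustrative only; the
# actual fine depends on the severity of the non-compliance.
def ai_act_fine_ceiling(global_turnover_eur: float) -> float:
    return max(0.06 * global_turnover_eur, 30_000_000)

# e.g. a company with EUR 2bn global turnover faces a ceiling of EUR 120m
print(ai_act_fine_ceiling(2_000_000_000))  # 120000000.0
```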

The DMA, meanwhile, aims to regulate online digital platforms designated as ‘gatekeepers’ – essentially, large or dominant social media or cloud computing companies with 45 million or more active monthly users in the EU and either revenues from business in the bloc of at least €7.5bn or a market capitalisation of at least €75bn. The DMA imposes several limitations on how gatekeepers may process data; requires the implementation of interoperability interfaces; and enhances consumers’ and business users’ rights. Again, the sanctions for non-compliance are significant: the European Commission will be able to impose penalties and fines of up to ten per cent of a company’s worldwide annual turnover, rising to 20 per cent of such turnover for repeated infringements.
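
Those quantitative thresholds, as summarised here, lend themselves to a rough sketch. Note that the DMA's actual designation test involves further quantitative and qualitative criteria, and the class and field names below are invented:

```python
# Rough encoding of the gatekeeper thresholds as summarised in this
# article; illustrative only, not the DMA's full designation test.
from dataclasses import dataclass

@dataclass
class Platform:
    eu_monthly_active_users: int
    eu_annual_turnover_eur: float
    market_cap_eur: float

def meets_gatekeeper_thresholds(p: Platform) -> bool:
    return p.eu_monthly_active_users >= 45_000_000 and (
        p.eu_annual_turnover_eur >= 7_500_000_000
        or p.market_cap_eur >= 75_000_000_000
    )

def dma_fine_ceiling(worldwide_turnover_eur: float, repeated: bool = False) -> float:
    """Up to 10% of worldwide turnover, or 20% for repeated infringements."""
    rate = 0.20 if repeated else 0.10
    return rate * worldwide_turnover_eur
```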

Daniel Lundqvist, Chair of the IBA Internet Business Subcommittee and a partner at Kahn Pedersen in Stockholm, says the DMA aims to make the digital sector fairer and more competitive. ‘The proposed DMA includes provisions which, under certain conditions, prohibit gatekeepers from using personal data in various ways, such as by combining personal data from one core platform service with personal data from any further core platform service, any other service provided by the gatekeeper, or from a third-party. These sorts of provisions may very well have an impact on gatekeepers’ operations and their use of AI, in particular their use of targeted ads,’ he says. 

For Daniel Dohrn, Co-Head of the Compliance Group at law firm Oppenhoff in Köln, the heart of the DMA lies in the principles of ‘self-assessment’ and ‘self-execution’. Accordingly, he says, compliance with the legislation requires a high level of technical understanding of gatekeepers’ digital business models. In-house lawyers should therefore understand how their employers’ algorithms work. ‘The European Commission has already made it clear on several occasions that “black box” scenarios – namely, defending oneself by saying that one could not have foreseen what the effects of the algorithm would be – are not acceptable. The Commission assumes that gatekeepers have full control over their AI. In-house and external lawyers will therefore not only have to know the law in future, but also have a profound knowledge of technical processes to ensure compliance,’ he says.

“In-house and external lawyers will not only have to know the law in future, but also have a profound knowledge of technical processes to ensure compliance


Daniel Dohrn, Co-Head of Compliance Group, Oppenhoff

The DSA, meanwhile, imposes new obligations on online intermediaries – such as hosting services providers and online platforms – in relation to user-generated content made available through their services. The DSA maintains the principle that intermediaries and platforms are exempt from liability related to the online dissemination of user-generated content, so long as they comply with content moderation obligations and take down any illegal content detected on their services ‘without undue delay’. The DSA also has robust sanctions. The European Commission can impose fines of up to six per cent of the global turnover of a service provider and can require immediate action where necessary to address very serious harms, forcing platforms to offer commitments on how they’ll remedy them. For rogue platforms that refuse to comply, the Commission can ask a court for a temporary suspension of the service as a last resort.

The DSA partly replaces and builds on the EU’s 20-year-old e-Commerce Directive, but establishes a series of new fundamental rules and principles around transparency, co-operation with national authorities, complaint-handling systems and terms and conditions. Lundqvist explains that this means, for example, that online platforms will need to be more transparent about their measures and systems for automatic content moderation. In addition, the legislation includes several new requirements and obligations to detect, identify and address illegal content. As such, Lundqvist says, those tech companies within the scope of the legislation need to ensure strict compliance.

William Long, global co-leader of law firm Sidley’s privacy and cybersecurity practice and head of its EU Data Protection group, based in London, says tech companies will need to ‘determine whether they fall within scope of one or more of these digital laws’ and begin to assess what effect the legislation may have on the business and what the impact may be from a compliance, product and resources perspective.

Bracing for impact

The DMA came into force on 1 November 2022 and gatekeepers are expected to comply with its obligations and requirements from May 2023. The DSA entered into force on 16 November 2022 and will take effect from early 2024. The AI Act, meanwhile, is not expected to become law until late 2023 or 2024. Even once it becomes binding, there will likely be a grace period of potentially 24–36 months before the main requirements come into force.

Lundqvist says the rules ‘will most likely lead to an increased predictability and increase the possibilities for providers to expand their business within the EU’. On the other hand, he says there are fears the regulations could stifle innovation. ‘Complying with the DSA and DMA will in many cases be expensive, which may prevent new providers from entering the EU and thereby closing that market to new innovative services. It could also be argued that the obligation in the DMA to share data and grant access to rivals might make it more attractive to become more passive and replicate market leaders rather than being innovative.’

Caroline Carruthers, CEO and co-founder of global data consultancy Carruthers and Jackson, agrees there are concerns that the legislation could stifle innovation rather than nurture it. ‘The problem with legislation in this area in general is that officials are putting laws in place to try to solve very particular problems, but they don’t necessarily think through the unintended consequences of the regulation,’ she says. ‘Legislation often forces organisations in the AI, machine learning and big data spaces to go down a specific route when it comes to development, but this can of course stifle innovation and create a regimented system where companies may fail to develop breakthroughs at the pace of less regulated regions, such as China.’

While there’s little doubt that the legislation is aimed squarely at ensuring Big Tech companies play by the rules and are easier to hold accountable, companies more generally – those using the tech rather than developing or providing it – also bear legal responsibility for their use of AI. While the AI Act, DSA or DMA may not apply to them directly, companies using AI and machine-learning solutions as part of their operations or decision-making processes could still find themselves open to claims and complaints under the EU’s General Data Protection Regulation (GDPR) if they are using EU citizens’ data – and subject to similarly eye-watering fines.

There’s a real danger that businesses wrongly assume the rules don’t apply to them. Christoph Krück, a senior associate at German law firm SKW Schwarz, says while the AI Act is mainly aimed at providers of AI systems, ‘other parties may also be obligated: product manufacturers, importers, distributors, users or other third parties’. As a result, he says, ‘in-house lawyers must therefore address whether their company or the products used and/or distributed fall within the scope of the regulation and, if so, which compliance provisions could become relevant’.

If the company’s products or services are within scope, Krück says the organisation must set up an appropriate compliance and risk management system, tailored to the products. The company must also create a culture of awareness – including establishing internal guidelines and training in key departments such as production and marketing – so that compliance is ensured. 

The regulation is not without its controversies. Krück says the AI Act’s scope of application is broad, will capture a wide spectrum of software products and will compel users of AI technologies to comply with certain obligations, in particular if the company exercises control over the input data. He also warns that companies based outside the EU should remember that the legislation will also likely apply to them, given its extraterritorial reach and level of enforcement. Further, he says, it’s expected we’ll see the so-called ‘Brussels effect’ in action, with the AI Act becoming a template for other countries to follow and adopt in the same way as the GDPR has arguably become the ‘gold standard’ of data protection worldwide. 

Other criticisms of the AI Act are that it’s too rigid and offers too little room for differentiation. While it pursues a risk-based approach, ‘the risk levels could be more differentiated in order to be able to distribute the high transparency, security and control requirements in an even more balanced manner,’ says Krück.

Krück also warns of ‘regulation overlap’ contributing to a lack of clarity about how the rules work in practice and concerns about whether they conflict with other legislation. ‘The relationship between the AI [Act] and the GDPR is not clear in some cases, such as with regard to the processing of personal data for the training, validation and testing of AI systems’ and this may lead to legal uncertainty, he says.

The heart of compliance

Companies using AI should consider establishing a comprehensive AI risk management programme integrated within their business operations. Such a programme should include an inventory of all AI systems used by the organisation; a risk classification system; risk mitigation measures; independent audits; data risk management processes; and an AI governance structure.
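
A minimal sketch of the kind of inventory record such a programme might maintain appears below. The fields, tier labels and governance check are illustrative assumptions that would need tailoring to the organisation and its regulators:

```python
# Illustrative AI system register; field names and values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    business_owner: str
    purpose: str
    risk_tier: str                 # e.g. "minimal" / "limited" / "high"
    processes_personal_data: bool  # triggers GDPR considerations
    mitigations: list[str] = field(default_factory=list)
    last_independent_audit: str | None = None  # ISO date, if audited

inventory = [
    AISystemRecord(
        name="candidate-screening-model",
        business_owner="HR",
        purpose="Shortlist job applicants",
        risk_tier="high",          # recruitment tools are high risk under the Act
        processes_personal_data=True,
        mitigations=["human review of rejections", "annual bias testing"],
    ),
]

# Simple governance check: every high-risk system should carry documented
# mitigation measures before it goes live.
for rec in inventory:
    if rec.risk_tier == "high":
        assert rec.mitigations, f"{rec.name} lacks risk mitigation measures"
```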

According to business consultancy McKinsey, companies should also be clear about the purposes they want to use AI technologies for and have clear reporting structures that allow for multiple checks of the AI system before it goes live. And given that many AI systems process sensitive personal data, companies should have robust, GDPR-compliant data privacy and cybersecurity risk management protocols in place.

Systems or processes that use customer data to facilitate AI decision-making may be particularly prone to producing outcomes that a regulator would regard as ‘harmful’. The financial services industry is especially exposed to such challenges. Robert Grosvenor, a managing director with management consultancy Alvarez & Marsal’s disputes and investigations practice, warns that in-house counsel in financial services companies will need to review how any AI systems are used and what outcomes they produce. ‘AI-generated content or use of automated decision making in the calculation of online financial products or product pricing, for example, will require its own controls and monitoring to ensure that outcomes are fair and appropriate,’ he says.
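
One simple form such monitoring could take, sketched below purely for illustration, is to compare decision outcomes across customer groups and flag material disparities for human review. The approval-rate metric and the 0.8 threshold are invented assumptions, not a regulatory standard:

```python
# Illustrative outcome monitoring for an automated decision system.
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (customer_group, approved) pairs from the live system."""
    totals: dict[str, list[int]] = defaultdict(lambda: [0, 0])
    for group, approved in decisions:
        totals[group][0] += int(approved)
        totals[group][1] += 1
    return {g: ok / n for g, (ok, n) in totals.items()}

def flag_disparity(rates: dict[str, float], threshold: float = 0.8) -> bool:
    """Flag for review if any group's rate falls below 80% of the best rate."""
    best = max(rates.values())
    return any(r < threshold * best for r in rates.values())

rates = approval_rates([("A", True), ("A", True), ("B", True), ("B", False)])
print(rates, flag_disparity(rates))  # {'A': 1.0, 'B': 0.5} True
```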

To ensure compliance, Carruthers says the first point companies need to understand is whether anything in their organisations will have to directly change because of the legislation. Her advice: follow the spirit of the law rather than the letter. ‘Understanding how to approach data ethically, and not waiting for legislation to explicitly tell you how to act appropriately, will ensure organisations can futureproof their compliance,’ she says. ‘When it comes to data, and AI specifically, legislation will only continue to evolve as the AI and machine-learning sectors continue to innovate at pace. Instead of planning to just about cross the compliance line, businesses should be changing their behaviour to far exceed it.’

“When it comes to data, and AI specifically, legislation will only continue to evolve as the AI and machine-learning sectors continue to innovate at pace


Caroline Carruthers, CEO and Co-Founder, Carruthers and Jackson

Lundqvist says the impact of the legislation will depend on the role the company has – whether it’s a developer or user of the technology. In some cases, he suggests, this ‘will require a rather complicated assessment in which both external counsel or other expertise may need to be consulted’.

For businesses where the role assessment indicates that the impact will be significant, Lundqvist recommends that in-house counsel alert and notify the company’s product development teams about the new rules as soon as possible. ‘In my experience, it is crucial to be proactive rather than reactive to any new regulation, and this certainly includes the DSA and DMA,’ says Lundqvist. ‘Since many companies are constantly developing and evolving different types of AI solutions, in many cases it will be possible to implement necessary changes and processes in the design and development phase of a product or service. It is, of course, much easier and cheaper if you can “build-in” and integrate compliance measures and functionality into a product or service compared to later implementing changes.’

“It is crucial to be proactive rather than reactive to any new regulation, and this certainly includes the Digital Services Act and the Digital Markets Act


Daniel Lundqvist, Chair, IBA Internet Business Subcommittee

In addition, says Lundqvist, legal and compliance teams may need to update any existing operational and compliance risk assessments, as well as take necessary measures to ensure that management and other stakeholders in the company are updated on the legislation and the expected impact on the business, particularly regarding increased liability.

Angela Busche, a partner in IT law and data protection at law firm Oppenhoff, says ‘in-house lawyers should function as legal advisor and strategic partner of both the business and the compliance department’. In particular, she says, they should analyse the organisation’s business as regards the applicability of the AI Act and classify the relevant business conduct according to the legislation’s risk scheme.

She believes the reliable classification of an AI system is ‘the starting point for any compliance strategy and management’, but also remains a ‘crucial challenge’. She adds that in-house lawyers should identify the organisation’s specific obligations and requirements under the AI Act and assist the business in meeting the additional compliance commitments.

To avoid legal uncertainty, Busche says it may be useful for in-house lawyers to ‘assume their organisations qualify as providers of high-risk AI systems from the outset’. As such, counsel should offer advice regarding the risk management, data governance and transparency around AI systems, as well as advice on technical documentation, record-keeping, ex-ante conformity assessment procedures, quality management systems, corrective action plans and the technology’s assessment before being placed on the market – as required to obtain a CE marking.

Busche also says in-house lawyers should assess whether the organisation conducts AI practices that, under the AI Act, will be prohibited. If so, in-house counsel should inform stakeholders and support them in developing alternative approaches in compliance with the future regulation.

There’s little doubt that AI is a powerful tool that will drive future business, but there are significant legal risks for organisations that use the technology as well as for those that develop it. While the EU’s legislation focuses on holding Big Tech companies more readily to account, companies more generally should be aware that they too are likely to have obligations under the rules to protect employees, customers and other stakeholders – as well as under other legislation such as the GDPR.