AI-based medical devices
Dr David Shaverdov
D+B Attorneys at law, Berlin, Germany
shaverdov@db-law.de
Dr Tobias Volkwein
D+B Attorneys at law, Dusseldorf, Germany
volkwein@db-law.de
Dr Ulrich Grau
D+B Attorneys at law, Berlin, Germany
grau@db-law.de
Introduction
In healthcare facilities, complex processes and high workloads shape everyday life. The use of artificial intelligence (AI) is intended to support optimised processes, relieve the burden on employees, and not only improve the quality of care, but also individualise it.
Of particular relevance is the use of deep learning models in the analysis of imaging results, such as magnetic resonance imaging (MRI), computed tomography (CT), and mammography.
To organise the innovative potential of AI systems and establish sufficient security, on 12 July 2024, the EU published the new Regulation (EU) 2024/1689 in the Official Journal (OJ) of the European Union establishing harmonised rules for artificial intelligence (hereinafter ‘AI Act’ or ‘AI Regulation’). It entered into force on 1 August 2024. The regulatory concept was developed from the New Legislative Framework (NLF).
This article provides an overview of the current regulations for AI-based medical devices and outlines national efforts to implement the European requirements. It also presents the Commission's far-reaching legislative proposals on the applicability of the AI Act and the simplification of the MDR, and highlights their consequences.
Application
For the scope of application of the AI Regulation, a distinction must be made between its temporal, practical, and territorial scope.
Temporal applicability
After the AI Act entered into force on 1 August 2024, its provisions began to apply in stages. Chapters I and II (General Provisions and Prohibited AI Practices) have applied since 2 February 2025.
From 2 August 2026, with a few exceptions, all requirements of the AI Act must be implemented for high-risk AI systems (AI Act, Article 113). All provisions are to apply by 2 August 2027 at the latest. This later date covers, among others, AI-based medical devices: as things currently stand, the AI Regulation will apply to them only from 2 August 2027.
The AI Act classifies AI systems into four categories based on the risk they pose to the public, with high-risk systems subject to the most restrictive regulations. The classification as a high-risk AI system is based on Chapter III, Article 6(1) and (2), in combination with Annex III of the AI Act. Under Article 6(1), an AI system is high-risk if it: (1) is itself a product covered by the Union harmonisation legislation listed in Annex I (such as the MDR) or is used as a safety component of such a product; and (2) is required to undergo a third-party conformity assessment by a notified body.
In contrast, limited-risk AI systems – such as systems for creating text, images and audio – are subject only to transparency requirements (AI Act, Article 50). The legal classification of general-purpose AI models (AI Act, Chapter V), on the other hand, depends on the specific context in which they are used. It has already become clear that important standards and instruments for practical implementation are lacking, and many medical device manufacturers face uncertainty in distinguishing between the AI Regulation and the Medical Devices Regulation 2017/745/EU (MDR). This is one of the reasons why, on 19 November 2025, the EU Commission proposed a ‘Digital Omnibus Regulation’.1
The proposal acknowledges the challenge that the delay of harmonised standards and other guidance poses for the implementation of the AI Act. The Commission therefore proposes to align the start of application of the rules for high-risk AI systems with the actual availability of harmonised standards, common specifications, or Commission guidelines. This flexibility has an end date, however: the rules for high-risk AI in sensitive areas such as employment and law enforcement (Annex III) will in any case apply from 2 December 2027, and the rules for high-risk AI embedded in products such as medical devices (Annex I) from 2 August 2028 at the latest.2
Practical applicability
The AI Act is a horizontal legal act: its rules apply regardless of the specific field of application (eg, automotive, infrastructure, healthcare) in which an AI system, as defined in Article 3(1) of the AI Act, is used.
However, certain AI systems are completely exempt from the AI Regulation, regardless of their risk. These include, for example, AI systems that are developed and put into service solely for the purpose of scientific research and development, as specified in Article 2(6) of the AI Regulation.
Territorial applicability
The territorial scope of the AI Regulation is determined by the so-called market location principle: the Regulation applies if an AI system is placed on the market or put into service in the EU. It also applies if the AI system is used outside the EU but its output is used within the EU.
High-risk AI systems
The mere fact that a medical device uses machine learning methods does not necessarily mean that it must meet the requirements for high-risk AI systems (AI Regulation, Article 8 ff). This is only the case if the requirements of Article 6 of the AI Act are fully met. The intended purpose of an AI system in accordance with Article 3(12) of the AI Act is decisive for classification.
The AI Act distinguishes between high-risk AI systems falling under Annex I and those falling under Annex III.
Annex III lists areas of application in which the mere use of AI systems results in them being classified as high-risk.
Annex I, on the other hand, lists Union harmonisation legislation covering certain products; the MDR appears in point 11. This annex does not stand alone but must always be read in conjunction with Article 6(1) of the AI Act. As a result, AI systems that are themselves medical devices (eg, software) or safety components of a medical device, and that must undergo a third-party conformity assessment, are considered high-risk AI systems.
For the high-risk AI systems referred to in Article 6(1) of the AI Act that relate to the harmonisation legislation listed in Annex I, Section B, Article 2(2) of the AI Act restricts the material scope of application.
If a product falls under the harmonisation legislation of Annex I, Section B, the result is that the essential requirements of the AI Act ultimately do not have to be complied with; only a small number of its provisions apply.
Operators of AI systems
Article 3(8) of the AI Act defines ‘operator’ as a provider, product manufacturer, deployer, authorised representative, importer or distributor. These actors have different responsibilities, with the main burden falling on the provider and the deployer (for high-risk AI systems, see Articles 16–27 of the AI Act).
Hospitals using high-risk AI-based medical devices in their operations, for example, will probably mainly be subject to the obligations that the AI Regulation imposes on deployers. These include the obligation to inform staff about the requirements of human oversight (AI Act, Article 14) and to ensure compliance with those requirements. A violation of these obligations can have serious financial consequences for the operator, since substantial fines may be imposed (AI Act, Article 99(3) and (4)).
Liability
A separate liability regime has not yet been included in the AI Act; liability is currently determined by national law. However, the new Product Liability Directive of 18 November 2024 (Directive (EU) 2024/2853) must be implemented in Germany by 9 December 2026. Providers of AI systems will then be considered manufacturers within the scope of the Product Liability Directive. Since AI models and systems will be categorised as products, medical device manufacturers must pay particular attention to how they update and adapt AI systems and software: any significant change can affect liability. Such changes should therefore be carefully documented to minimise liability risks.
Implementation of the AI Regulation into German law
To implement the provisions of the AI Regulation, the Federal Ministry for Digital Transformation and Government Modernisation (Bundesministerium für Digitales und Staatsmodernisierung, BMDS) has now presented a draft bill for a law implementing the AI Regulation, Article 1 of which contains the AI Market Surveillance and Innovation Promotion Act (KI-Marktüberwachungs- und Innovationsförderungsgesetz, KI-MIG).3 The draft bill emphasises in particular that the authorities that are already the competent market surveillance and notifying authorities under the fully harmonised Annex I, Section A of the AI Act will also become the competent authorities under the AI Act.
For AI-based medical devices, this means that in Germany the Federal Institute for Drugs and Medical Devices (Bundesinstitut für Arzneimittel und Medizinprodukte, BfArM) remains responsible for market surveillance at federal level. At state level, the market surveillance authorities already responsible for medical device surveillance remain in charge.
Likewise, the Central Authority of the States for Health Protection in the Field of Medicinal Products and Medical Devices (Zentralstelle der Länder für Gesundheitsschutz bei Arzneimitteln und Medizinprodukten, ZLG), which is responsible for the designation and surveillance of notified bodies in Germany, will remain competent for AI-based medical devices.
Where no supervisory authority has yet been designated, the Federal Network Agency (Bundesnetzagentur, BNetzA) will act as the competent market surveillance and notifying authority (section 2(1) and (2) of the KI-MIG (draft)).
In addition, an AI coordination and competence centre (Koordinierungs- und Kompetenzzentrum KI-VO, KoKIVO) is to be set up to address the limited availability of AI specialists, pool resources and AI expertise, and provide the relevant authorities with technical knowledge.
Furthermore, an independent AI market surveillance chamber (Unabhängige KI-Marktüberwachungskammer, UKIM) is to be established at the Federal Network Agency to carry out market surveillance; it is intended to supervise the use of high-risk AI systems in areas relevant to fundamental rights.
The draft bill proposes that fines be imposed in accordance with the German Administrative Offences Act (Ordnungswidrigkeitengesetz, OWiG). Some experts highlight that this also makes general supervisory duties relevant, since the OWiG allows personal fines to be imposed for breaches of supervisory duties. This is likely to be particularly relevant for boards of directors and management.
Possible changes due to the draft bill by the EU Commission
On 16 December 2025, the EU Commission published a proposal to simplify the MDR.4 It contains numerous adjustments intended to reduce the administrative burden, eliminate unnecessary regulatory requirements, and ensure improved coordination within the EU in order to promote innovation and digitalisation.
For medical devices, the EU Commission plans a fundamental readjustment of the interaction between the AI Act and the regulation of medical devices. Under the proposal, medical devices would fall within the scope of the AI Act only to a very limited extent, because the MDR is to be moved from Section A to Section B of Annex I of the AI Act. Article 2(2) of the AI Act then provides that its provisions apply only to a very limited extent, since the legal acts referred to in Annex I, Section B are already subject to separate detailed sectoral regulation and can therefore only be partially supplemented by the AI Act. This is primarily intended to prevent double regulation and (contradictory) overlaps between the MDR and the AI Act. As a result, essentially only the MDR would apply to most medical devices.
Experts agree that the Commission's proposal represents a milestone for the medical device industry: the reduction of regulatory requirements and administrative obligations provides a crucial incentive to revitalise the sector.
The legislative proposal for the amendment of the AI Act and the simplification of the MDR (and IVDR) has now been submitted to the European Parliament and the Council of the European Union. It remains unclear whether and when the Commission's proposals will be adopted and how the timing requirements of the Digital Omnibus will be taken into account.
Due to these developments at European level, the KI-MIG is likely to be on hold for the time being.
Notes
1. Available at https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-regulation-proposal.
2. See https://digital-strategy.ec.europa.eu/en/faqs/digital-package.
3. See https://bmds.bund.de/service/gesetzgebungsverfahren/gesetz-zur-durchfuehrung-der-ki-verordnung.
4. Available at https://health.ec.europa.eu/document/download/25e7ea7c-cab3-40cf-86d9-d11f5e7744d8_en?filename=md_com_2025-1023_act_en.pdf.