Obstacles to healthcare AI: legal issues relating to the increasing use of AI in healthcare and medical technologies

Thursday 2 November 2023

Ricardo Costa Macedo

Caiado Guerreiro, Sociedade de Advogados, Lisbon

rmacedo@caiadoguerreiro.com

Ricardo Rodrigues Lopes

Caiado Guerreiro, Sociedade de Advogados, Lisbon

rlopes@caiadoguerreiro.com

Georgii Fokin

Caiado Guerreiro, Sociedade de Advogados, Lisbon

gfokin@caiadoguerreiro.com

The impact of artificial intelligence (AI) on the healthcare market is becoming increasingly pronounced, indicating a turning point in modern medical practice. In diagnosis, for example, some machine learning tools have been shown to identify pancreatic cysts harbouring, or at risk of, cancer more accurately than conventional clinical and imaging criteria alone, demonstrating the potential for AI to reduce costs and current rates of morbidity.[1] AI has also shown promise in health information management, robotic surgery, and the replacement of face-to-face care with virtual assistance.[2] However, despite these impressive achievements, a number of uncertainties and risks must be addressed before the technology is fully integrated into the healthcare market. This article addresses both the current and the longer-term issues of AI in the healthcare sector.

Current issues

Data concerns

One of the main issues currently facing AI stems from the limitations of the data used to train its algorithms. More specifically, there is a risk that the data fed to an AI system will be highly selective, producing a very narrow understanding of topics that are in fact more complicated and require further consideration. This problem is more than a theoretical concern: there are already examples of AI reaching incorrect conclusions because of the data it received. Caruana recalls how a medical AI trained to provide a preliminary assessment of pneumonia patients determined, from very selective data, that asthma patients had a lower probability of death than the rest of the population.[3] The AI failed to consider the context in which the data was gathered, overlooking the extra care that was being given to asthma patients. This illustrates how some AI currently lacks the awareness that human doctors possess, and suggests that the information fed to its algorithms requires extensive monitoring to avoid legal consequences for health professionals and organisations. Ultimately, AI is a machine learning process, and the quality of an AI system will always depend on the quality of the data used to train it.
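To make the mechanism concrete, the following is a minimal illustrative sketch in Python using synthetic data. It does not reproduce the Caruana study; all variable names and figures are invented for illustration. Because the extra care given to asthma patients is absent from the training data, a standard model learns that asthma 'protects' against death:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
asthma = rng.integers(0, 2, n)      # 1 = patient has asthma
severity = rng.normal(0.0, 1.0, n)  # illness severity (observed)

# Hidden confounder: asthma patients were triaged to intensive care,
# which lowered their observed mortality in the historical records.
extra_care = asthma.astype(float)
logit = 0.8 * severity - 2.0 * extra_care - 1.0
died = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(np.column_stack([asthma, severity]), died)
print(f"learned asthma coefficient: {model.coef_[0][0]:+.2f}")
# Prints a negative coefficient: the model 'concludes' that asthma is
# protective, because the extra care that actually drove the outcome
# was never recorded in the training data.
```

The point is not the particular model but the data: unless the training data captures the context of care, the correlations a model learns can be clinically wrong.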

Privacy concerns

Other issues arise from the need to input data to support AI machine learning, such as the legal risks associated with giving an algorithm access to patients' private health information. This is problematic because it not only exposes personal information to the security threats associated with online data, but can also violate the personal autonomy of patients. For example, public-private partnerships set up to implement machine learning have in the past resulted in poor protection of privacy.[4] From an ethical perspective, gathering private data without patient consent infringes on personal autonomy. It should not be forgotten that the data being processed will potentially include biometric, health and genetic data, all of which are special categories of data subject to a higher level of protection, including limitations on the grounds for processing. In this regard, the General Data Protection Regulation (GDPR) also lays down procedures and principles to be observed when data is processed for scientific research purposes, including adequate safeguards such as the pseudonymisation of the data used. From a legal perspective, there should be safeguards to ensure that the data processed for training AI models respects patients' privacy rights, including obtaining valid consent from data subjects where required.
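By way of illustration only, the following Python sketch shows one common form the pseudonymisation safeguard contemplated by the GDPR can take: direct identifiers are replaced with keyed hashes before the records are used for training, with the secret key held separately from the data set. The field names and key handling are hypothetical simplifications, not a compliance recipe.

```python
import hmac
import hashlib

# Illustrative placeholder: in practice the key is held separately
# from the pseudonymised data set, under strict access control.
SECRET_KEY = b"stored-separately-under-access-control"

IDENTIFIER_FIELDS = ("name", "national_id")  # hypothetical field names

def pseudonymise(record: dict) -> dict:
    """Return a copy of the record with direct identifiers replaced by tokens."""
    out = dict(record)
    for field in IDENTIFIER_FIELDS:
        digest = hmac.new(SECRET_KEY, out[field].encode(), hashlib.sha256)
        out[field] = digest.hexdigest()[:16]  # stable, non-reversible token
    return out

patient = {"name": "Maria Silva", "national_id": "12345678", "diagnosis": "asthma"}
print(pseudonymise(patient))
# The clinical content stays usable for research, while re-identification
# would require the separately held key.
```

It should be noted that pseudonymised data remains personal data under the GDPR (Recital 26), so the safeguards discussed above continue to apply to it.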

Longer-term issues

Liability concerns associated with autonomous AI

One of the most significant longer-term concerns about the integration of AI into the healthcare market is the question of liability. When a human doctor misdiagnoses a patient, makes a surgical error or prescribes the wrong medicine, they can be held liable for the injuries suffered by the patient; in such circumstances, the law of medical negligence applies. However, courts have yet to consider liability for medical negligence in the case of autonomous AI. With machine learning, the ability of AI to operate as a medical practitioner is increasingly becoming a reality, yet as the law currently stands it is not clear how civil liability should apply to AI or who would be liable. This is because of the ongoing debate over which type of liability should apply: human or product liability. Given that the technology completes tasks typically performed by human doctors, one could argue that human liability should apply and that it is the doctor (or hospital) using the AI that should be liable. The difficulty with this is that AI is not a legal person and cannot, therefore, be directly liable for acts of negligence.

On the other hand, product liability is not an accurate categorisation either, as the autonomous decision-making of the product can blur the link between the AI's manufacturer and the product itself. Indeed, it will not always be clear who is responsible for the 'defective' product: whether it is the legal person who developed the algorithm, or the one who provided the data or trained the model, as different entities may have contributed to the end result. Moreover, an undesired outcome from the use of AI may not be attributable to the AI as such, but to how it was used, for instance where it was deployed in situations for which it was not adequate. This illustrates that there is currently no single liability approach for AI, and suggests that modern concepts of liability will have to evolve to encompass autonomous technology and ensure accountability. In this respect, the European Commission has already issued a proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence.[5]

Conclusion

The effects of AI on healthcare and medical technology have been largely positive, and the technology has extraordinary potential yet to be realised. However, despite the overwhelming benefits, significant issues still need to be addressed before the technology is fully integrated into the healthcare market. Given the novelty of AI and the new challenges it poses, specific legislation will need to be enacted to address adequately both the current concerns and the longer-term issues associated with the technology. This does not mean, however, that the existing legal framework cannot be applied to AI in the meantime.


Notes

[1] Simeon Springer et al, ‘A Multimodality Test to Guide the Management of Patients with a Pancreatic Cyst’ (2019) Science Translational Medicine 11(501), 2.

[2] Brent Mittelstadt, ‘The Impact of Artificial Intelligence on the Doctor-Patient Relationship’, (2021) Strasbourg: Council of Europe Publishing, 8.

[3] Rich Caruana et al, ‘Intelligible Models for Healthcare: Predicting Pneumonia Risk and Hospital 30-day Readmission’ (2015) Association for Computing Machinery, 1721-1730.

[4] Blake Murdoch, ‘Privacy and Artificial Intelligence: Challenges for Protecting Health Information in a New Era’ (2021) 22 BMC Med Ethics 122, 2.

[5] Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), COM(2022) 496 final, 28 September 2022.