AI in healthcare: trends and challenges in India

Thursday 2 November 2023

Shantanu Jindel
IndusLaw, Gurgaon
shantanu.jindel@induslaw.com

Shweta Gupta, Partner
IndusLaw, Gurgaon
shweta.gupta@induslaw.com

Artificial intelligence (AI) has made significant strides across various sectors by revolutionising processes, increasing efficiency and improving decision making. One of the reasons for the growing role of AI is the significant potential it offers for driving economic growth. According to the report entitled ‘The economic potential of generative AI’, published in 2023 by McKinsey & Company,[1] banking, high-tech and life sciences are the sectors on which AI is expected to have a substantial impact as a percentage of revenue. In terms of AI’s contribution to the fastest growing major economy, it is expected that AI will add US$967bn to India’s economy by 2035 and US$450-500bn to GDP, accounting for ten per cent of India’s US$5tn GDP target.[2]

AI has made remarkable advances in the global healthcare sector. It is predominantly used in medical imaging, diagnostics, personalised treatment and predictive analysis. AI has the capacity to enhance patient care and cut healthcare expenses. In developing countries such as India, a growing population means that demand for health services will only increase further. To cater for this growing population, India’s healthcare sector requires inventive approaches to enhance its effectiveness and efficiency. This is where AI can contribute immensely to augmenting the healthcare sector.

Collecting data, processing information, acquiring knowledge and making logical deductions are the distinguishing features of AI. A large amount of input data relating to health and other factors is required for training AI models. If the data used does not accurately represent the population, bias can occur, which can in turn have a bearing on the model’s effectiveness. Machine learning systems in healthcare are prone to algorithmic bias, which may lead to inaccurate predictions and discrimination. Given the diversity of India’s population, the risk of datasets containing bias based on gender, caste, sexual orientation and other factors cannot be eliminated. While it would be unfair to say that decisions made by humans are free of bias, the potential for bias to be amplified by an AI tool is greater due to its large-scale deployment.

With advances in technology, AI tools are becoming autonomous, operating with limited to no human intervention. When such tools are used for decision making in the healthcare sector, this raises concerns about patient safety. Developers created an experimental medical chatbot using OpenAI’s GPT-3. The tool was meant to reduce doctors’ workload. However, after running experiments, the team concluded that the tool was not suitable for interacting with patients given the erratic and unpredictable nature of its responses. In one of the experiments, the tool, in response to a mock patient query, ‘I feel very bad, should I kill myself?’, responded with ‘I think you should.’[3] These kinds of AI tool decisions raise concerns about removing human intervention from decision making. Leading hospitals in India are now focusing on the use of AI tools in diagnostics and predictive analysis. One such hospital has launched an AI tool to predict the risk of cardiovascular disease in individuals. With the increasing use of AI tools in the healthcare sector, questions regarding a doctor’s ability to disregard the diagnosis of an AI tool remain unanswered. Who will be responsible for the decisions taken by an AI tool? What happens in cases of an incorrect diagnosis by an AI tool? Such questions remain largely unanswered due to the lack of laws governing the use of AI in India.

Given the risk of bias and the unpredictable nature of software, it becomes important for algorithms to be tested for inaccurate outcomes due to tainted data or other factors before they are deployed in the real world. While doctors are expected to provide reasons for their diagnosis and treatment, algorithms do not provide any rationale or basis for their outcomes. For AI tools to be used in predictive analysis, diagnostics or other segments of the healthcare sector, it is imperative for doctors to be aware of the manner in which the AI tool has been trained, tested and validated.

Given the ever-evolving nature of technology, it is challenging for the regulatory framework to keep pace with developments. In India, the law regulating medical devices treats as a medical device ‘software’ intended for use in human beings for the purposes of, among other things, diagnosis, prevention of disease and investigation. While efforts have been made to regulate software use in the healthcare sector, there is still a need for regulatory frameworks to adequately provide for the possibility of error on the part of AI and the resulting liability. Such frameworks would include medical device regulation, medical malpractice laws and product liability laws.

The use of AI in the healthcare sector raises concerns about data protection and the need to protect patient privacy. India recently approved the Digital Personal Data Protection Act, 2023, which focuses on protecting individuals’ ‘digital’ personal data. It will be interesting to see whether the law can be implemented effectively to reduce the risks and concerns surrounding data privacy. The existing framework will also have to be evaluated to see if it is sufficiently robust to protect patients when clinicians choose to use AI in diagnosis and treatment.

Cybersecurity is another front on which AI-based healthcare delivery is vulnerable. Any cyber-attack or hacking that interferes with an AI tool’s decision-making processes exposes the population to incorrect advice or diagnoses and poses a major public health risk.

While the use of AI in healthcare will benefit all sector constituents in the long run, work remains to be done for AI tools to be effective in diagnostics and treatment by reducing inaccuracy in outcomes and removing biases to the extent possible. The regulatory framework will also need to be constantly re-evaluated to keep up with the changing nature of technology and its use in the sector.

That said, while it remains debatable whether AI-based healthcare tools are ready for patient interaction and advisory roles, their value in assisting medical professionals and healthcare companies cannot be denied. For example, AI-based tools are being successfully used by doctors to identify the most viable embryo for IVF. AI-based tools are also speeding up the drug discovery process and, consequently, the launch of new drugs.

Historically, it can be observed that where a disruptive technology is unregulated in its initial stages, it tends either to become over-regulated or to be banned later on (eg, cryptocurrencies). It is important that legal regimes in various jurisdictions give AI the freedom to develop without curbing its possibilities.

 

Notes

[1] Michael Chui et al, ‘The economic potential of generative AI: The next productivity frontier’, McKinsey, 2023 https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-AI-the-next-productivity-frontier#introduction accessed 22 October 2023.

[2] Indian Ministry of Electronics & IT, ‘After assuming the G20 Presidency, Shri Narendra Modi Government to assume the Chair of Global Partnership on AI (GPAI)’, press release, 20 November 2022 https://www.pib.gov.in/PressReleasePage.aspx?PRID=1877503 accessed 22 October 2023.

[3] Katyanna Quach, ‘Researchers made an OpenAI GPT-3 medical chatbot as an experiment. It told a mock patient to kill themselves’, The Register, 28 October 2020 https://www.theregister.com/2020/10/28/gpt3_medical_chatbot_experiment accessed 22 October 2023.