Slouching towards oversight - artificial intelligence and the law

Tuesday 20 June 2023

Shruti Gupta

Ronin Legal, Bangalore

shruti@roninlegal.in

Anushka Iyer

Ronin Legal, Bangalore

anushka@roninlegal.in

Shantanu Mukherjee

Ronin Legal, Bangalore

shantanu@roninlegal.in

Introduction

The past several months have witnessed a frenzy of activity in the artificial intelligence (AI) space, sparked by the explosive release of OpenAI’s ChatGPT in November 2022. AI applications have multiplied exponentially in only a few months, with instances of use across sectors including healthcare, drug development, marketing, finance, coding and the arts. The momentum behind AI innovation is exemplified by substantial investments in AI companies, with the first quarter of 2023 alone witnessing approximately $1.7bn in funding, according to data from PitchBook.

Legal and ethical concerns

The widespread adoption and development of AI technology has triggered significant legal and ethical concerns, including job displacement, biased outcomes, data privacy violations, and threats to competition, intellectual property (IP) and contract law. Notably, a statement endorsed by over 350 AI researchers and industry leaders, including Bill Gates, OpenAI Chief Executive Officer (CEO) Sam Altman and Google DeepMind CEO Demis Hassabis, emphasises the potential ‘existential threat’ AI poses to humanity. These concerns necessitate the implementation of robust legal and ethical frameworks to address AI’s risks effectively.

Data privacy and informed consent

AI models such as ChatGPT, Bard and Midjourney are trained on extensive amounts of publicly available data, thereby raising concerns related to data theft, privacy breaches and informed consent. Several countries, including China, Iran, Italy and Russia, have banned the use of ChatGPT due to data privacy concerns. Italy recently lifted this ban following OpenAI’s compliance with its privacy regulations.

The Center for AI and Digital Policy filed a complaint with the Federal Trade Commission (FTC), alleging that OpenAI violated the FTC Act and released GPT-4 despite being aware of its risks, including those pertaining to data privacy.

Algorithmic transparency and bias

Certain clinical decision support software (CDS) tools, designed to assist clinicians in diagnosis and treatment, exhibit significant disparities in accuracy across different regions, with developers unable to provide satisfactory explanations for such disparities. This lack of transparency undermines the principles of accountability and liability under law.

Additionally, AI systems can inadvertently perpetuate human biases from underlying training datasets, leading to gender-specific biases such as CDS tools failing to detect kidney injuries in women, or racial biases manifesting as AI-generated outputs suggesting that individuals with lighter skin are more susceptible to skin cancer than those with darker skin.

IP ownership and antitrust

Numerous artists have initiated a lawsuit against Stability AI, Midjourney and DeviantArt for using copyrighted artworks to train AI models without authorisation. Getty Images has filed a suit against Stability AI for unauthorised use of Getty-owned stock images. The Center for Artistic Enquiry and Reporting has published an open letter describing these unauthorised uses of copyrighted images as the ‘greatest art heist in history’.

Moreover, some of the images generated by Stability AI included a modified version of Getty’s watermark, allegedly violating US trademark law and misrepresenting the relationship between Stability AI and Getty, thereby also infringing antitrust law.

Several software developers have also filed suit against GitHub, Microsoft and OpenAI, alleging that the scraping of public source code from GitHub to create OpenAI’s Codex and GitHub’s Copilot infringed on copyright, contract, privacy and business laws.

These instances raise several IP concerns, including debates over whether AI tools infringe IP rights and whether they can be legally recognised as inventors or creators, with most jurisdictions holding that they cannot. They also raise competition law issues, particularly when language models such as ChatGPT are trained on, and reproduce information from, restricted-access websites, potentially redirecting traffic and subscribers away from those websites.

The regulation of AI

European Union

The European Union (EU) has made significant progress towards enacting comprehensive AI legislation with the EU AI Act, which recently received approval from the EU’s parliamentary committee.

Risk-based classification

This Act categorises AI systems based on their perceived level of risk and imposes corresponding obligations. High-risk AI systems are subject to extensive obligations, while low-risk systems face limited obligations, such as self-regulation or transparency measures. Notably, under the Act, companies deploying generative AI (GAI) systems are required to disclose copyrighted material used in the training process. Further, AI systems used for scientific or research purposes are exempt from the Act.

AI sandboxes

The EU AI Act also establishes ‘AI sandboxes’, which provide a controlled environment for developing, testing and validating AI systems, aiming to encourage innovation by minimising liabilities and obligations.

Liability

The EU has introduced a draft AI Liability Directive and a revised Product Liability Directive, which aim to complement the EU AI Act and implement a sweeping and restrictive AI governance framework.

The revised Product Liability Directive expands the scope of potentially accountable parties to include service providers and online marketplaces, and allows consumers to seek compensation from representatives of non-EU manufacturers. It introduces a shift in the burden of proof, requiring claimants to demonstrate a likelihood of damage (as opposed to actual damage) caused by the AI product. The AI Liability Directive aims to adopt comparable standards, particularly regarding the shift in the burden of proof, to determine liability arising from the use of AI systems.

United Kingdom

The UK government has published a White Paper titled ‘A pro-innovation approach to AI regulation’ to guide the regulation of AI systems. The White Paper recommends regulating the output generated by AI systems rather than the systems themselves. It also suggests the establishment of regulatory sandboxes, technical standards, and periodic compliance assessments throughout an AI system’s life-cycle. However, in contrast with the EU, the White Paper remains silent on the issue of liability, adopting a wait-and-see approach.

United States

Similar to its data privacy framework, AI regulation in the United States is fragmented across industries, as well as federal and state levels. Various regulatory bodies have released guidelines and recommendations to address specific aspects of AI regulation.

FDA action plan

The Food and Drug Administration (FDA) has developed an Action Plan, which includes recommendations for Good Machine Learning Practices and guidelines for regulating CDS that use AI in medical contexts.

AI Bill of Rights

The White House has published the ‘Blueprint for an AI Bill of Rights’, which outlines key principles for AI regulation, emphasising the importance of building safe and effective systems, identifying risks and considering human alternatives as fallbacks.

State AI regulations

Several US states, including California, New Jersey, New York and Rhode Island, have proposed legislation to regulate the use of automated decision-making systems.

Massachusetts recently released a draft regulation governing GAI systems. It requires registration, implementation of security measures, risk assessment and compliance with copyright laws.

Connecticut has released a similarly comprehensive framework for AI governance that mandates the responsible use of AI and imposes registration and compliance requirements for the procurement, development and use of AI systems.

FTC guidelines and warning statement

The FTC has issued guidelines identifying potential risks and highlighting principles for designing human-centric AI systems.

In April 2023, four US regulators, including the FTC and the Justice Department, issued a joint statement affirming their authority under existing law to take action against companies whose AI products cause harm to users.

Use of AI in facial recognition and biometric systems

The FTC has issued a policy statement addressing the misuse and potential harm of biometric information systems and has laid out criteria for evaluating the harm caused by such systems. The Facial Recognition and Biometric Technology Moratorium Act, introduced in the Senate, calls for a temporary ban on the use of facial recognition and biometric technologies by federal entities.

Singapore

Singapore has introduced the Model Artificial Intelligence Governance Framework, which recommends a risk-based approach. The framework offers practical measures to address the risks associated with AI.

The Personal Data Protection Commission, in collaboration with the Infocomm Media Development Authority, has also launched AI Verify, an AI compliance tool (currently in beta testing) to enable companies to test and demonstrate the safety and reliability of their AI systems.

China

The Cyberspace Administration of China has introduced draft legislation called the ‘Management Measures for GAI Services’. The draft legislation emphasises adherence to socialist norms, security assessments and compliance with existing laws. Notably, the obligations imposed by the draft law are provider-centric, with providers being responsible for preventing misuse and discouraging excessive reliance on AI systems by users.

Australia

In November 2019, Australia introduced a national voluntary AI Ethics Framework identifying eight principles for the development and governance of AI. More recently, in June 2023, Australia published a discussion paper on Safe and Responsible AI. While the paper does not address certain issues, including the impact on labour markets, national security and IP, it identifies gaps in governance and proposes mechanisms for AI development.

Japan

Japan has so far adopted a soft-law approach to AI regulation. ‘Japan’s National Strategy in the New Era of AI’, a working White Paper on AI regulation first published in 2019, is updated annually to reflect the evolving AI policy landscape. In April 2023, the Japanese government established a Strategy Council as a central command centre responsible for formulating national strategies on AI and providing primary policy direction, including on crucial issues such as data protection and copyright.

India

India currently lacks any specific legislation pertaining to AI. However, the government agency NITI Aayog, in its efforts to address the challenges and ethical concerns surrounding AI adoption, has published two papers: the ‘National Strategy for AI’ (NSAI) and ‘Responsible AI’. The NSAI, among other things, proposes the establishment of AI research centres, updates to the IP framework and the introduction of AI/ML courses in universities. ‘Responsible AI’ is a working paper exploring the ethical implications of facial recognition AI systems; it addresses bias, privacy, security risks and transparency, and recommends the adoption of legal and technological measures consistent with the principles of responsible AI.

Ethical guidelines for application of AI in biomedical research and healthcare

The Department of Health Research and the Indian Council of Medical Research have released ethical guidelines for use of AI in medicine, recommending an ethics review process, governance of AI use, requirements for valid informed consent, and guiding principles for stakeholders involved in biomedical research and healthcare.

Information Technology Act, 2000

Although India’s Information Technology Act, 2000 (IT Act) does not specifically address AI, certain provisions may be relevant to AI-related issues. For example, Section 43A holds corporate bodies liable, including for payment of compensation, for negligence in maintaining reasonable security practices when dealing with sensitive personal data. A person who is the subject of an AI-generated deepfake that can be said to be obscene may also have recourse under the Act.

The IT (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (SPDI Rules) provide a protective framework for handling sensitive personal information, including health data and biometrics, serving as a safeguard against privacy concerns for AI systems processing such data.

Review of the IP Rights Regime in India, 2021

The Parliamentary Standing Committee on Commerce’s 2021 report on the review of the IP rights regime in India recommends the inclusion of a separate category of rights for AI-generated works and AI solutions within the existing IP framework. The report was cited by the Delhi High Court in a recent judgment, in which the court noted the need to reassess provisions of Indian patent legislation in light of emerging technologies.

Conclusion

While the IT Act and Rules in India provide a loose regulatory framework for governing the use of AI systems, certain legal concerns, such as transparency, algorithmic bias and IP rights, remain unaddressed.

At a recent conference involving IT sector stakeholders, Indian government officials indicated that the forthcoming Digital India Act (DIA) would contain requirements for AI governance, safeguards to address user harm and antitrust measures to mitigate the market dominance of large technology companies. The potential impact of the DIA on the regulatory landscape, not only for AI but also for other technologies in India, remains to be seen.