Building AI strategies according to India’s new data framework

Tuesday 10 March 2026

Naqeeb Ahmed Kazia
CMS INDUSLAW, Bengaluru
naqeeb.ahmed@cms-induslaw.com

Purushotham Kittane
CMS INDUSLAW, Bengaluru
purushotham.kittane@cms-induslaw.com

Introduction 

By May 2027, India’s new data protection law, which is being implemented in three phases, will be fully in force. The Digital Personal Data Protection Act 2023 (the ‘DPDP Act’) is set to fundamentally change the privacy and data protection landscape in India.[1] 

Alongside India's existing copyright and information technology laws, the DPDP Act is also set to help shape India's artificial intelligence (AI) landscape (the Indian government does not foresee the creation of a standalone AI law).[2] AI systems process large volumes of personal data, often by design. As a result, the providers of AI systems will inevitably be subject to the obligations and restrictions outlined in the DPDP Act. Embedding privacy requirements within AI systems will no longer be a choice, but a statutory obligation.

Global organisations that develop, engage with or use AI systems are not new to navigating the different data protection and privacy frameworks in various jurisdictions. In this regard, many recurring themes within the Indian DPDP Act are similar to other major pieces of privacy legislation, such as the European Union's Regulation 2016/679, otherwise known as the General Data Protection Regulation (GDPR),[3] and the UK's General Data Protection Regulation.[4]

However, the DPDP Act also has specific nuances that depart from the common themes found across these globally recognised pieces of data protection legislation: a unique public data exemption, specific prescribed 'legitimate uses' as a legal basis for processing, an obligation to 'prevent' a personal data breach, the equal treatment of all categories of personal data and, even, the imposition of certain 'duties' on data subjects. For global organisations, understanding these nuances could make the difference between leveraging the legal framework to deploy AI systems effectively and being restricted by it.

Is consent the way forward?

Traditional AI training relies on scraping and amassing vast datasets. The use of AI systems may also involve the input of personal data by the user. The DPDP Act obligates a data controller (termed a 'data fiduciary' in the DPDP Act) to obtain 'free, specific, informed, unconditional and unambiguous' consent from the data subject (termed a 'data principal' in the DPDP Act). Once these consent obligations become effective in May 2027, will organisations have to provide lengthy privacy notices and obtain an affirmative action from every data subject before processing their personal data?

The answer is yes if organisations must rely on consent as the legal basis for processing personal data. They need not rely on consent where one of the legitimate uses prescribed under the DPDP Act applies, such as where the data subject provides the data voluntarily or where a niche purpose, such as an employment relationship, can serve as the legal basis for the processing. Unlike the GDPR's legitimate interests, which are established through practice rather than the letter of the law, the DPDP Act enumerates very specific legitimate uses. For instance, if the data subject voluntarily provides their personal data for a very specific purpose, such as answering a prompt, the AI system need not obtain consent to respond to that prompt. Such personal data can be processed by the AI system and deployed in a localised manner. Another legitimate use available to a data controller is the processing of personal data for the purposes of employment. This opens up many use cases for employers to engage AI systems for human resource management or to protect themselves from loss and liability.

Lastly, the DPDP Act does not apply to personal data that is made publicly available either by the data subject or by another person under a legal obligation. This public data exemption is broad: it applies to all types of personal data and is not limited, as under the GDPR, to sensitive personal data manifestly made public by the data subject. As a result, datasets containing personal data that are scraped from public webpages or public social media profiles, for instance, and used for AI training will be beyond the purview of the DPDP Act. The use of such unrestricted datasets may provide a competitive edge for training AI systems within the Indian ecosystem.

Organisations that rely on a strategic combination of these legal bases for processing personal data, such as consent, voluntary disclosure and employment, while also availing themselves of the public data exemption, will have the opportunity to enhance their use and deployment of AI systems.
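
By way of illustration only, this layered legal-basis analysis could be encoded as a triage step that runs before any record reaches an AI pipeline. The Python sketch below is hypothetical throughout: the record fields and category names are our own shorthand rather than terms drawn from the DPDP Act.

from dataclasses import dataclass
from enum import Enum, auto

class Basis(Enum):
    PUBLIC_DATA_EXEMPT = auto()    # publicly available data sits outside the DPDP Act
    VOLUNTARY_PROVISION = auto()   # legitimate use: data volunteered for a specified purpose
    EMPLOYMENT = auto()            # legitimate use: processing for purposes of employment
    CONSENT_REQUIRED = auto()      # fall back to free, specific, informed consent

@dataclass
class Record:
    made_public_by_subject: bool   # eg, scraped from a public social media profile
    volunteered_for_purpose: bool  # eg, typed into a prompt for that specific purpose
    employment_context: bool       # eg, an HR management use case

def classify_basis(record: Record) -> Basis:
    """Hypothetical triage of one record before it enters an AI pipeline."""
    if record.made_public_by_subject:
        return Basis.PUBLIC_DATA_EXEMPT
    if record.volunteered_for_purpose:
        return Basis.VOLUNTARY_PROVISION
    if record.employment_context:
        return Basis.EMPLOYMENT
    return Basis.CONSENT_REQUIRED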

The machine unlearning challenge

The challenge of removing personal data from the training set of an AI system, or 'unlearning' it from a model (machine unlearning), is not uncommon, particularly when businesses face the task of fulfilling right-to-erasure requests. Commentary on the application of the GDPR in this regard recommends, as the cleanest approaches, training AI systems only on anonymised or impersonal data, or removing the personal data and retraining the system.[5] A similar challenge can also arise when an AI system hallucinates personal data or selectively processes it in a probabilistic, black-box manner.

Under the DPDP Act, however, this dilemma arises only for personal data processed on the basis of consent. The DPDP Act provides for a right to withdraw consent to processing instead of a blanket right to erasure and, unlike the GDPR, that right applies only to personal data provided by the data subjects themselves. This distinction narrows the challenge for businesses down to identifying the personal data being processed on the basis of the data subject's consent. Organisations can therefore address the issue through the strategic use of personal data processed under the available legitimate uses, or of anonymised or public data, where possible.
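
A minimal sketch of this narrowing effect follows, assuming each training record is tagged with its legal basis (the field names and the train() step are placeholders for an organisation's own pipeline): because withdrawal bites only on consent-based data, a retraining job need only drop the withdrawn consent-based records.

def filter_for_retraining(records: list[dict], withdrawn_subject_ids: set[str]) -> list[dict]:
    """Drop only the consent-based records of subjects who have withdrawn.

    Assumes each record carries hypothetical 'basis' and 'subject_id' fields;
    records processed under a legitimate use, the public data exemption or
    anonymisation survive a withdrawal untouched.
    """
    kept = []
    for record in records:
        if record["basis"] == "consent" and record["subject_id"] in withdrawn_subject_ids:
            continue  # consent withdrawn: exclude before the next training run
        kept.append(record)
    return kept

# Usage (train() stands in for the organisation's own pipeline):
# training_set = filter_for_retraining(training_set, {"subject-42"})
# model = train(training_set)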

The risks businesses face from the abuse of personal data in the context of AI

Liability can arise in various forms during the deployment or use of AI systems. Recently, for instance, the Indian Ministry of Electronics and Information Technology (MeitY) issued a notice to X (formerly known as Twitter) requiring it to act within 72 hours to prevent the circulation of obscene, nude, indecent and sexually explicit content on its platform.[6] The notice was triggered by users of Grok AI generating sexually explicit imagery from photos of women, which could qualify as personal data. The notice, issued pursuant to India's existing IT laws, was directed not at xAI, the developer of Grok AI, but at X, the operator of the social media platform on which Grok AI was deployed. This is not an isolated issue and has affected other AI systems too, such as Gemini and ChatGPT.[7] Such concerns have reportedly come to the attention of the Malaysian and French governments as well.[8] Organisations that do not own AI systems but deploy or merely use them should heed the relevant local harmful content laws in order to be well prepared for such situations.

The Indian government does not contemplate the need for a standalone AI law that, inter alia, tackles abuse or harm caused by AI systems. Where unauthorised access to personal data or a privacy violation has occurred, the DPDP Act may also be invoked. Data controllers may be liable for fines of up to INR 250 crore (approximately US$28m) for failing to put in place reasonable security safeguards to prevent a personal data breach. By contrast, the GDPR does not impose a result-oriented obligation (ie, to prevent a breach) but rather a means-oriented obligation (ie, to implement certain technical and organisational measures). Hence, the misuse of personal data through AI systems, as exemplified above, will need to be tackled differently in India than under the GDPR.

Another aspect unique to the DPDP Act, in regard to the potential abuse of personal data via AI systems, is the obligation on the data controller to ensure that the data it holds is complete, accurate and consistent when used to make a decision that affects a data subject. This differs from the GDPR, which expressly mandates accuracy but not the completeness or consistency of personal data. Under the DPDP Act, if an employer engages an AI system to assist with an employee assessment and the AI characteristically 'completes' an incomplete dataset about the employee in the process, there may be a higher burden on the employer to ensure that AI hallucinations do not feed into decisions about employees.
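
One way to operationalise this higher burden, sketched below under the assumption that imputed attributes can be detected by comparing the model's output against the verified source record (the field structure is hypothetical), is to route any decision that rests on AI-completed data to a human reviewer.

def requires_human_review(verified_profile: dict, model_output: dict) -> bool:
    """Return True if the AI supplied any value absent from the verified data.

    Hypothetical structure: model_output maps attribute name to value, and an
    attribute missing (or None) in the verified profile counts as imputed.
    """
    imputed = [attr for attr in model_output if verified_profile.get(attr) is None]
    # Completeness/accuracy duty: hallucinated fields must not drive the decision.
    return bool(imputed)

# eg, an appraisal score built partly on an attendance figure that the
# employee's verified record never contained would go to a human reviewer.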

Navigating AI resilience under the DPDP Act

Organisations will likely need to revisit risk mitigation as an exercise in order to align their processes with the particulars of the DPDP Act when deploying or using AI systems. Generic safeguards may be put in place, such as differential privacy (the addition of statistical 'noise' to make a dataset impersonal), on-device training using localised processing and triggered human-in-the-loop checks.
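
To make the first of these concrete, the simplest form of differential privacy adds calibrated noise to aggregate statistics before they leave a dataset. The sketch below implements the standard Laplace mechanism; the epsilon and sensitivity values in the usage note are purely illustrative.

import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Add Laplace noise calibrated to a query's sensitivity.

    Noise scale b = sensitivity / epsilon: a smaller epsilon means stronger
    privacy but a noisier answer. The difference of two independent
    exponential samples is Laplace-distributed.
    """
    b = sensitivity / epsilon
    noise = random.expovariate(1 / b) - random.expovariate(1 / b)
    return true_value + noise

# eg, releasing a count of users (sensitivity 1) under an illustrative
# privacy budget of epsilon = 0.5:
# noisy_count = laplace_mechanism(true_value=1203.0, sensitivity=1.0, epsilon=0.5)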

That being said, proactive measures to address the concerns specific to AI in an Indian context may also be architecturally embedded into an organisation's processes relating to personal data. For example, guardrails for AI systems have become commonplace as a means of screening and blocking harmful content that is either input into an AI model or generated by it. These guardrails must be constructed with India-specific requirements in mind, such as the obligation on data controllers to report a personal data breach within 72 hours of becoming aware of it. Privacy notices that are compliant with the DPDP Act should be provided at the touchpoints in an AI system where a user is likely to input personal data that may be used for further AI training. Attention must also be given to jurisdictions that may be blacklisted for the purposes of cross-border data transfers by the Indian government under the DPDP Act: personal data must not be processed by AI servers located in such jurisdictions.
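
On the cross-border point specifically, a routing layer might refuse to send personal data to inference servers in restricted jurisdictions. The sketch below is a minimal illustration: the server registry is invented, and the blacklist starts empty because no countries have yet been notified as restricted.

# Hypothetical registry mapping inference endpoints to hosting jurisdictions.
SERVER_JURISDICTIONS = {
    "inference-in-1": "IN",
    "inference-eu-1": "DE",
    "inference-us-1": "US",
}

# Placeholder: countries notified as restricted under the DPDP Act, if any.
RESTRICTED_JURISDICTIONS: set[str] = set()

def route_request(contains_personal_data: bool, preferred_server: str) -> str:
    """Keep personal data away from servers in restricted jurisdictions."""
    if not contains_personal_data:
        return preferred_server
    jurisdiction = SERVER_JURISDICTIONS.get(preferred_server)
    if jurisdiction is None or jurisdiction in RESTRICTED_JURISDICTIONS:
        return "inference-in-1"  # unknown or restricted: fall back to a domestic endpoint
    return preferred_server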

The DPDP Act can also be positively leveraged by organisations, as follows:

- In a first of its kind, the DPDP Act creates the role of 'consent managers', who could act as intermediaries for data controllers, allowing users to opt their data in for use by AI systems and, in effect, acting as single-point data funnels (see the sketch after this list). Consent managers are intended to be the interfacing entity between data controllers and data subjects for all things related to consent and are required to act in a fiduciary capacity towards data subjects. The rules issued under the DPDP Act,[9] inter alia, provide for the strict regulation of consent managers, and the industry is evolving in this regard.

- The DPDP Act treats all personal data as falling within the same category and does not differentiate sensitive personal data. This simplifies training pipelines for AI systems, but also requires high-level security measures to be implemented across the entire dataset.

- The DPDP Act also imposes duties on data subjects themselves and subjects them to penalties for violations. These duties include not suppressing material information, not impersonating another person and complying with the law when providing their personal data.
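
To illustrate the 'single-point data funnel' idea in the first point above, the sketch below models a consent manager as a fiduciary intermediary that records opt-ins and answers controllers' queries before any consent-based processing. The interface is entirely hypothetical; the actual technical standards flow from the DPDP Rules.

from dataclasses import dataclass, field

@dataclass
class ConsentManager:
    """Hypothetical single-point funnel between data subjects and controllers."""

    # subject_id -> the set of purposes the subject has opted in to
    _opt_ins: dict[str, set[str]] = field(default_factory=dict)

    def opt_in(self, subject_id: str, purpose: str) -> None:
        self._opt_ins.setdefault(subject_id, set()).add(purpose)

    def withdraw(self, subject_id: str, purpose: str) -> None:
        self._opt_ins.get(subject_id, set()).discard(purpose)

    def may_process(self, subject_id: str, purpose: str) -> bool:
        """A controller would call this before any consent-based processing."""
        return purpose in self._opt_ins.get(subject_id, set())

# cm = ConsentManager()
# cm.opt_in("subject-7", "ai-training")
# cm.may_process("subject-7", "ai-training")  # True until withdrawn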

The key takeaway

The strategic integration of AI systems into business processes is now more the norm than the exception. Multinational corporations can achieve true global readiness by sensitising themselves to important local laws, such as the DPDP Act. Understanding the nuances of the DPDP Act can serve as a strategic foundation for responsible and scalable AI deployment, transforming regulatory burdens into a competitive advantage in an increasingly data-sovereign world.


[1] The Digital Personal Data Protection Act 2023 is available here: https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf last accessed on 6 January 2026. The enforcement timeline for the Digital Personal Data Protection Act 2023 is available here: https://www.meity.gov.in/static/uploads/2025/11/c56ceae6c383460ca69577428d36828b.pdf last accessed on 6 January 2026.

[2] Press Trust of India, ‘Govt prefers existing laws over new regulations to govern AI; focus on innovation: MeitY Secy’, https://www.ptinews.com/story/business/govt-prefers-existing-laws-over-new-regulations-to-govern-ai-focus-on-innovation-meity-secy/3193632 last accessed on 6 January 2026.

[3] Regulation (EU) 2016/679, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679 last accessed on 6 January 2026.

[4] United Kingdom General Data Protection Regulation, https://www.legislation.gov.uk/eur/2016/679/contents last accessed on 6 January 2026.

[5] EDPB report entitled AI Complex Algorithms and effective Data Protection Supervision: Effective implementation of data subjects’ rights, https://www.edpb.europa.eu/system/files/2025-01/d2-ai-effective-implementation-of-data-subjects-rights_en.pdf last accessed on 6 January 2026.

[6] ANI, ‘MeitY writes to X over "misuse of Grok AI" for obscene content, seeks action taken report within 72 hours’, https://www.aninews.in/news/business/meity-writes-to-x-over-misuse-of-grok-ai-for-obscene-content-seeks-action-taken-report-within-72-hours20260102203123/ last accessed on 6 January 2026.

[7] Wired, ‘Google’s and OpenAI’s Chatbots Can Strip Women in Photos Down to Bikinis’, https://www.wired.com/story/google-and-openais-chatbots-can-strip-women-in-photos-down-to-bikinis/ last accessed on 6 January 2026.

[8] TechCrunch, ‘French and Malaysian authorities are investigating Grok for generating sexualized deepfakes’, https://techcrunch.com/2026/01/04/french-and-malaysian-authorities-are-investigating-grok-for-generating-sexualized-deepfakes/ last accessed on 6 January 2026.

[9] The Digital Personal Data Protection Rules 2025, https://www.meity.gov.in/static/uploads/2025/11/53450e6e5dc0bfa85ebd78686cadad39.pdf last accessed on 6 January 2026.