Mitigating the risks of ‘shadow AI’
The use of ‘shadow AI’ tools – generative artificial intelligence technologies that aren’t approved for internal use within a company – by employees and third parties is leading to an increased risk of cyberattack, data loss and regulatory action. In-House Perspective explores how legal teams can mitigate the threats.
Employees and third parties are increasingly turning to ‘shadow AI’ – generative artificial intelligence tools that haven’t been approved for internal use within a company – exposing organisations to a heightened risk of cyberattack, data loss and regulatory action.
Such tools lack approval for work purposes because they’re deemed unnecessary or insecure, or – very frequently – because the company doesn’t even know they exist.
Workers and contractors are using popular off-the-shelf AI tools because they’re easy to use, speed up their work and may be more effective than in-house technology.
However, commentators warn that organisations could be at risk of violating data privacy laws if, through the use of ‘shadow AI’, information is passed on by employees, vendors and contractors to train AI models without explicit consent, while potentially sensitive corporate data could also be leaked in the process. Another threat is ‘model poisoning’ – a type of cyberattack where hackers manipulate an AI model’s training data to corrupt its behaviour so that it delivers inaccurate, biased or even dangerous outputs.
Most tools that fall under the ‘shadow AI’ umbrella are readily available, but they’re often used without regard to the employer’s cybersecurity or data protection policies. That’s because employees tend to use them on their own personal devices, while third parties are either not asked to disclose the types of AI they use or simply neglect to do so.
In a survey by tech news website Cybernews, 59 per cent of employees admitted to using AI tools that their employer had not approved, and three-quarters of those users said they’d shared sensitive information as a result. Executives and senior managers were found to be most likely to use unapproved AI tools at work, meanwhile.
Other surveys have suggested similar findings as the popularity of generative AI has soared. In a 2024 poll, German tech company Software AG found that workers consider personal AI tools so valuable that nearly half would keep using them even if their employer banned them outright.
The prevalence of shadow AI means organisations must recognise a potentially serious data and IT security issue. But the survey results should also prompt companies to consider accommodating employees’ preference for tools they feel make them more effective at work.
Policies and problems
Many commentators believe that banning AI outright is counterproductive and that a more practical path is to ‘thoughtfully embrace’ it by developing clear usage policies, especially as employees are already eager to adopt AI tools. Establishing internal AI app stores with approved tool catalogues, for example, can give users more choice while maintaining reasonable guardrails for usage. Similarly, best-practice frameworks such as ISO 42001 give organisations a structured way to harness AI’s benefits while mitigating its risks, and outline practical steps to guide responsible adoption. This gives boards, regulators and customers confidence that AI is being governed with the same rigour as information security and data privacy.
Martin Schirmbacher, Member of the IBA Technology Law Committee Advisory Board, doesn’t believe that trying to ban unauthorised AI is practical. Instead, he thinks the most effective way to address employee use of shadow AI is to increase ‘visibility’ and build a structured governance framework. To achieve this, he says, organisations should establish monitoring measures to detect unapproved tools, coupled with a clear AI policy and approval process that ‘does not just prohibit, but which encourages responsible use and early notification to an internal AI board’.
A central element, he says, is education and awareness – employees must understand the legal, security and intellectual property (IP) risks of external AI tools. Equally important, he says, is offering approved and reliable alternatives so that innovation happens within governance structures rather than outside them.
“The AI policy should acknowledge that employees will experiment with new tools and should therefore focus on encouragement, transparency and control rather than prohibition
Martin Schirmbacher
Member, IBA Technology Law Committee Advisory Board
Schirmbacher – who’s a partner and specialist IT lawyer at law firm Härting in Berlin – doesn’t think a separate shadow AI policy is necessary either. In fact, he believes such policies are often counterproductive. Rather than creating parallel rules, he says, organisations should embed shadow AI management directly into a single, comprehensive AI policy. ‘The AI policy should acknowledge that employees will experiment with new tools and should therefore focus on encouragement, transparency and control rather than prohibition,’ he says. ‘Key elements include mandatory notification and approval procedures, secure testing or sandbox environments for responsible experimentation, and awareness training to make risks clear. Integrating these aspects into one AI policy ensures consistency, avoids duplication, and creates a governance framework that both enables innovation and mitigates legal, security and compliance risks.’
Additionally, the AI policy should explicitly set baseline safeguards for any testing or experimentation with new tools. For instance, employees must not use real production data, must avoid processing personally identifiable information (PII) and must not disclose confidential business information or IP during trials – unless and until the tool has successfully undergone a formal security and legal review.
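As an illustration of how such a safeguard might be operationalised, the sketch below shows a naive pre-check that a sandbox harness could run before any text reaches a tool under evaluation. The patterns and function names are illustrative assumptions, not a description of any particular product, and real data-loss-prevention tooling is considerably more sophisticated.

```python
# Illustrative sketch only: a naive pre-check a sandbox harness might run before
# text is sent to a tool under evaluation. The patterns below are assumptions
# for demonstration and are no substitute for real DLP tooling.
import re

PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone number": re.compile(r"\b(?:\+?\d[\s-]?){9,14}\d\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def screen_sandbox_input(text: str) -> list[str]:
    """Return the names of any PII-like patterns found in text destined for a trial tool."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    sample = "Please summarise the complaint from jane.doe@example.com."
    hits = screen_sandbox_input(sample)
    if hits:
        print("Blocked: input appears to contain " + ", ".join(hits) + " - use synthetic data instead.")
    else:
        print("No obvious personal data detected - input may proceed to the sandbox.")
```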
Sharon Klein, Vice-Chair of the IBA Technology Disputes Subcommittee, adds that organisations need to do more to isolate AI use from its outputs. ‘Don’t input AI outputs into company systems, otherwise you risk corrupting corporate data,’ she warns. She also believes that companies should have a policy of AI self-reporting, whereby both employees and third parties can submit details of the AI technologies they’re using so the organisation can determine whether they’re safe and permissible. Furthermore, she says companies should have AI policies that include explicit clauses on the non-disclosure of corporate information, and they should appoint a single point of contact to grant approvals for what kinds of AI can be used – and how.
A key problem, says Klein – who’s a partner and Co-Chair of the Privacy, Security and Data Protection Practice at law firm Blank Rome in Los Angeles – is that employees and third parties may not be aware that there are any AI capabilities inherent in particular software or apps they’re using. ‘It is possible that a lot of the AI that employees and companies have on their devices may come from freeware or software updates, where people simply agree to installing the AI technology without proper informed consent as part of the vendor’s terms and conditions,’ she says. ‘Even if they do not wish to use the AI within the software, the technology may have access to data as it runs in the background, giving rise to concerns over data and IP loss.’
To mitigate such risks, Laura Land Himelstein, counsel at law firm Day Pitney in New York, says existing approval processes for new software should be reinforced to make explicit that they apply to AI and shadow AI. She adds that IT teams should actively monitor enterprise platforms such as Microsoft Office and Zoom for new AI functionality and, ideally, disable or configure features that may create an issue before they’re deployed to the workforce. Companies should also provide an accessible, regularly updated whitelist of approved AI tools and use cases to further promote transparency and reduce the temptation for employees to turn to unauthorised shadow AI capabilities, she says.
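By way of example only, an approval workflow built around such a whitelist could boil down to a lookup like the sketch below; the tool names, use cases and review dates are hypothetical.

```python
# A minimal sketch of the kind of whitelist lookup an internal approval workflow
# might rely on. The tool names, use cases and review dates are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class ApprovedTool:
    name: str
    permitted_use_cases: set[str]
    last_reviewed: date

WHITELIST = {
    "internal-copilot": ApprovedTool("internal-copilot", {"drafting", "summarisation"}, date(2025, 6, 1)),
    "meeting-notes-ai": ApprovedTool("meeting-notes-ai", {"transcription"}, date(2025, 3, 15)),
}

def is_use_permitted(tool_name: str, use_case: str) -> bool:
    """Check whether a tool appears on the whitelist for the requested use case."""
    tool = WHITELIST.get(tool_name)
    return tool is not None and use_case in tool.permitted_use_cases

print(is_use_permitted("internal-copilot", "drafting"))     # True
print(is_use_permitted("public-chatbot", "summarisation"))  # False: not an approved tool
```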
Klein says that for some time, companies have been aware of the problem of unauthorised AI use by both employees and third parties ‘but they have no idea how to control it apart from putting policies in place’. Companies have tended to rely on ‘bring your own device’ (BYOD) policies as a way of monitoring employee AI use, but she warns that these don’t account for the fact that corporate data may not be protected once employees enter it into certain AI tools. Similarly, with regard to third parties, Klein says that many companies still rely on standard clauses covering adherence to data protection and IP policies, rather than specifically prohibiting them from inputting the company’s data into any non-approved AI tools.
Managing third parties
Commentators tend to agree that monitoring and enforcing AI use policies will probably be more successful with employees than with third-party contractors. Scott Laliberte, Global Leader of compliance consultancy Protiviti’s Emerging Technology Group, believes that while organisations can introduce strict controls to prevent the use of shadow AI in-house, they have more limited options to clamp down on its use by third parties. ‘Your risk mitigation comes down to only a few options – mainly, contractual provisions,’ he says.
Mike Scott, Chief Information Security Officer at data security software company Immuta, says the problem with the use of shadow AI by third parties is that ‘you’re trusting someone outside your organisation to make decisions about your data security. If a vendor or contractor uses a public AI tool – say, to summarise a dataset or generate code – you may never know that sensitive or regulated information has left your control’.
‘Without contractual safeguards, there is little assurance that external parties are handling data in a compliant or secure manner,’ adds David Hoppe, Managing Partner at law firm Gamma Law in the US. ‘In these cases, the company often has no visibility into what models are being used, where the data is stored, or how long it is retained.’
“Without contractual safeguards, there is little assurance that external parties are handling data in a compliant or secure manner
David Hoppe
Managing Partner, Gamma Law
To protect against third-party risks, commentators say companies should treat AI use as a central part of vendor management. Effective due diligence should include questions about which AI models are being used, how data is secured, how client information is kept separate and whether any inputs are stored or repurposed for training. For Schirmbacher, the focus should be less on technical controls and more on contracts and governance. ‘Agreements should clearly define how AI tools can be used, how data is processed, whether training is permitted and what the obligations are regarding deletion or return of data,’ he says. If third parties are closely integrated into workflows, a shared governance framework can ensure aligned security, IP protection and documentation standards, he adds.
Contractual protections are critical and should be explicit: governance protocols for third-party AI use – requiring disclosure of such tools and audits of data-handling practices – should be written directly into contracts, including the right to observe partners’ AI activities. ‘Organisations need to extend their governance protocols beyond internal users,’ says Scott. ‘That starts with clear contractual language around AI usage – what tools are permitted, what data can be used, and how activity will be logged and audited. But it also requires technical enforcement: purpose-based access policies, monitoring for abnormal behaviour and controls that apply to all data interactions – human or machine, internal or external.’
Hoppe too says that companies should reserve audit rights and require external partners to meet the same regulatory and security standards imposed internally, with obligations tailored to industry requirements such as the EU/UK General Data Protection Regulation (GDPR), the EU AI Act and/or the US Health Insurance Portability and Accountability Act (HIPAA). ‘By combining contractual commitments, technical boundaries and continuous oversight, companies can extend their governance perimeter to cover external actors as effectively as their own employees,’ he says.
Commentators advise that third-party agreements should bar vendors from training models on company data, require certification of the deletion or return of all information once the engagement ends, and mandate prompt notification if any incident occurs. Regular audits and clear breach notification timelines should also be built into contracts, supported by ongoing monitoring and re-certification.
Companies should also maintain a central register of all partners who use AI on their behalf, along with the models and environments involved, and require high-risk use cases to undergo formal impact assessments before launch. ‘When a business partner makes poor AI choices, your company can still face legal consequences and fines,’ says Ryan Zhang, Founder of Notta, the company behind an AI-powered live transcription tool. He adds that there’s also the potential for a ‘domino effect’ where one vendor’s AI security breakdown can have an impact on all of their clients simultaneously.
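To make the idea of a central register concrete, here is a minimal sketch of how such a record might be structured and queried for high-risk entries that still lack an impact assessment; the field names, partners and risk tiers are purely illustrative assumptions.

```python
# An illustrative sketch of a central register of third parties using AI on a
# company's behalf, flagging high-risk entries that still need a formal impact
# assessment before launch. Field names, partners and risk tiers are assumptions.
from dataclasses import dataclass

@dataclass
class PartnerAIEntry:
    partner: str
    model: str
    environment: str                 # e.g. "vendor-hosted" or "on-premises"
    high_risk: bool
    impact_assessment_done: bool = False

register = [
    PartnerAIEntry("Acme Analytics", "general-purpose LLM", "vendor-hosted", high_risk=True),
    PartnerAIEntry("TranscribeCo", "speech-to-text model", "vendor-hosted", high_risk=False),
]

def pending_assessments(entries: list[PartnerAIEntry]) -> list[PartnerAIEntry]:
    """High-risk uses that should not go live until an impact assessment is recorded."""
    return [e for e in entries if e.high_risk and not e.impact_assessment_done]

for entry in pending_assessments(register):
    print(f"Hold launch: {entry.partner} ({entry.model}) is awaiting an impact assessment.")
```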
Commentators warn that limited insight into the provenance of a third party’s tools can also create accountability gaps – for example, if biased, inaccurate or non-compliant results are generated, the company may still bear the legal and reputational fallout. These risks increase in regulated industries, says Hoppe, where improper handling of data by third parties can quickly translate into regulatory violations for the company itself. Klein agrees that organisations cannot outsource their compliance. ‘As a data controller, you are responsible for how that data is used, accessed, shared, and stored – not the third-party you have allowed to process and use it,’ she says.
The pivotal role of in-house counsel
Since there’s a need for greater assurance around how employees and third parties use AI – and potentially input employer or client data – in-house legal teams are likely to see their workload in this area increase. Schirmbacher believes in-house legal teams can play a pivotal role in managing shadow AI risks. ‘Their impact is strongest in policy development, ensuring that AI governance frameworks are legally sound, practical and well communicated,’ he says. ‘They can also shape contracting and third-party governance, making sure obligations around data use, IP, and model training are clearly defined.’
Another key area where in-house lawyers can add value is in raising awareness and promoting better education, says Schirmbacher. This doesn’t only mean general training, ‘but turning key users in business units into ambassadors for AI governance, creating a multiplier effect,’ he explains. Furthermore, the legal team should ‘help embed AI into the company’s overall governance structure, linking it with risk management, compliance, IT security, and data protection so that shadow AI is managed systematically rather than [in an] ad hoc [fashion],’ he says.
Klein also believes there are several contributions in-house lawyers can make to mitigating the risks of shadow AI. Firstly, they can strengthen contracts with vendors and third parties. ‘If they want to use AI for work on your behalf, then they need to inform you of the AI they want to use and get contractual permission to use your data. They cannot proceed without your consent,’ she says.
Secondly, in-house lawyers ‘need to get ahead of developments’ as the technology evolves and the related risks change. ‘AI moves too fast to cover with a blanket AI policy,’ she says. ‘AI policies need to be reviewed regularly so that they are flexible enough to be “future proofed” for emerging risks.’
Klein also believes that in-house lawyers should push their organisations to provide training and deploy AI detection software ‘as standard’ to determine the extent to which AI is used, what its use cases are, who’s using it and what kind of technology is involved. She believes it’s important that organisations foster a culture where employees can question whether using AI is always appropriate. ‘Employees need to ask themselves: is AI the right tool for the job? AI is not a substitute for your own analysis. People should only use AI for tasks that they can easily check, rather than rely on it to produce answers for areas outside of their […] expertise or competence that become more difficult for them to verify. AI will always produce an answer – but it doesn’t necessarily mean it will be the right one.’
Given how simple, cheap and prevalent such tools are, it’s highly likely that employees and third parties will use some form of AI that organisations are unaware of and haven’t approved. While this presents serious risks, an even greater threat lies in failing to prepare the organisation for that scenario. According to Scott at Immuta, ‘the reality is that shadow AI isn’t just inside your four walls anymore. If your governance doesn’t account for your entire ecosystem, you’re flying blind’.
“The reality is that shadow AI isn’t just inside your four walls anymore. If your governance doesn’t account for your entire ecosystem, you’re flying blind
Mike Scott
Chief Information Security Officer, Immuta
Neil Hodge is a freelance journalist and can be contacted at neil@neilhodge.co.uk