AI and regulatory sandboxes

Neil Hodge
Monday 16 March 2026

Jurisdictions around the world are increasingly promoting the use of regulatory sandboxes as a way of encouraging safe innovation, including for AI products. In-House Perspective asks what in-house counsel should consider when weighing up participation in such initiatives.

In an ideal world, technological innovation would always have compliance by design at its core, coupled with effective regulation and appropriate enforcement to prevent harm. Sadly, this isn’t always the case, as some companies rush to market, hoping their products and services are compliant while regulators have the difficult task of attempting to get businesses to remedy any harm caused.

Besides trying to persuade tech developers and deployers to focus more heavily on compliance from the outset, jurisdictions around the world are increasingly promoting the use of regulatory sandboxes as a way of encouraging safe innovation. Fintech and data services are the main areas of focus here, but regulators in sectors such as energy and healthcare are also encouraging more companies to consider trialling their planned products and services in a sandbox pre-launch.  

Given that the EU was the first region to establish specific rules governing the use of AI, it’s perhaps unsurprising that the bloc is also alert to the benefits sandboxes can offer the technology. The EU’s Digital Europe programme funds the EUSAiR project, which aims to create safe spaces for companies to test their AI products, while the bloc has legislated to require Member States to establish at least one regulatory sandbox by August 2026 – and several have already done so.

Similarly, China – one of the world leaders in AI tech innovation – is promoting sandboxes as a way of ensuring best practice. They have already been used by Chinese companies in respect of SME financing and anti-money laundering tools, for example.

In October 2025 the UK announced plans for the AI Growth Lab, to allow companies to test new products in real-world conditions but with some rules and regulations temporarily relaxed under ‘strict supervision’. Healthcare, professional services, transport and manufacturing are set to be among the first sectors to benefit from the changes, with the scheme focusing on industries where a change of rules may be necessary to enable innovation – either by modifying existing laws or passing new legislation.

Also in 2025, the UK’s Financial Conduct Authority (FCA) set up the AI Live Testing initiative. Participating financial services companies have a safe space in which they can develop their AI products with compliance front of mind. The FCA has used sandboxes since 2016, while the Bank of England – the UK’s central bank – began operating the Digital Securities Sandbox, together with the FCA, in 2024.

Playing in the sand

Commentators point to regulatory sandboxes as having several advantages. They can help drive innovation by clarifying regulatory expectations, enable closer engagement between companies and regulators, and lend credibility to organisations using them, for example – all of which can accelerate product development and investment.

Molly Su, Vice-Chair of the IBA Financial Innovations Subcommittee, says that from a corporate perspective, a regulatory sandbox provides a controlled, real-world testing environment that allows companies to gather authentic data and user feedback, which then significantly accelerates product iteration, shortens the overall innovation cycle and lowers costs. Crucially, she says, it helps mitigate regulatory uncertainty in emerging sectors by helping companies identify compliance boundaries and clarify requirements. ‘This allows companies to gain experience and prepare for full compliance obligations before a wider market launch,’ she says.

Tina Herbing, Chair of the IBA Alternative Finance Subcommittee, says sandboxes offer a structured way to test new technologies and receive input on regulatory aspects in a safe environment, which may reduce post-launch legal risk and compliance costs. Sandboxes can also help regulators keep pace with technological developments and craft more sensible, market-oriented guidance that benefits both the market and those developing or deploying AI, she says.

Companies may also gain improved investor confidence from using a regulatory sandbox, says Martin Ebner, Chair of the IBA AI Banking and Financial Subcommittee. This is because supervisory involvement lowers perceived regulatory risk. Additionally, he says, sandbox documentation and exit reports can support and accelerate subsequent conformity assessment under the EU’s AI Act, the bloc’s primary legislation for promoting effective AI governance.

Daniel Bellau, a partner at law firm Edwin Coe in London, believes that one of the largest positives is early, constructive engagement with the regulator at a point when the technology is still evolving. ‘That engagement can be genuinely very useful. It gives earlier visibility of regulatory and liability risk, insight into how existing rules are likely to be interpreted in practice, and a chance to sense-check assumptions before going live,’ he explains. ‘Sandboxes can also improve internal discipline by bringing legal, compliance and product teams together much earlier, and they create a documented compliance journey that can be helpful if decisions are later scrutinised.’

“Sandboxes can improve internal discipline by bringing legal, compliance and product teams together much earlier, and they create a documented compliance journey


Daniel Bellau
Partner, Edwin Coe

Those working in the tech industry also acknowledge the benefits of sandboxes. Janet Bastiman, Chief Data Scientist at anti-financial crime tech company Napier AI, says one of the major benefits from an AI perspective is access to data that is ‘more representative of the entire ecosystem than just the direct customers of your company,’ which ‘helps reduce bias and ensure that training is more accurate’.

Meanwhile, Evin McMullen, co‑founder and CEO of Billions Network, a digital identity verification platform, says another advantage is that ‘consumer protection is built in from day one’. She says that ‘regulators require clear disclosures, compensation schemes and exit ramps which are often stricter than normal market rules during the test but not necessarily relevant outside the experimental space’.

Not for everyone

However, companies should remember that sandbox participation ‘is not a shortcut to licensing or compliance,’ says Ebner, who’s also a partner at law firm Schoenherr in Vienna. Nor do regulators select many products or services for sandbox participation, he adds. ‘Sandboxes are resource-intensive for authorities, and they accept only a small number of projects with clear public interest, novelty, readiness and testability,’ says Ebner. ‘This creates selection dynamics and queueing, particularly for complex or cross-sector AI systems.’

Ebner further explains that the operational burden is substantial, with companies needing to produce detailed sandbox plans, risk assessments, metrics and frequent reports whilst opening their technology and processes to supervisory scrutiny. Critically, he adds, ‘sandboxes do not fully resolve multi-regulatory complexity’ as financial AI systems often sit at the intersection of other legislation and regulations – which can be extraterritorial, such as the EU’s AI Act – sectoral regimes, anti-money laundering and terrorist financing rules, as well as consumer and data protection law.

Another problem is that the relatively small scale and fixed timeframes of most sandboxes mean companies often can’t test products thoroughly and get a grasp of how they’ll perform at a larger scale, says Herbing, who’s a partner at Gorrissen Federspiel in Copenhagen. There’s also ‘inherent uncertainty about how the AI system will perform and what regulatory requirements will apply once it exits the sandbox and is deployed in real world conditions,’ she adds.

Su, who’s a partner in the Shanghai office of King & Wood Mallesons, agrees there are limitations as regulatory sandboxes operate under defined parameters regarding time, technology and scope. This ‘laboratory’ setting may not always fully replicate the complexities of the broader commercial environment, she says. Participation also requires a significant investment of time and manpower, adds Su, as companies must accumulate data and engage in continuous dialogue with regulators to determine compliance paths. Further, ‘completing a regulatory sandbox test may not guarantee that a product can immediately scale, nor does it eliminate future compliance risks once the product exits the controlled environment,’ says Su.

Others agree that one of the most common mistakes companies make is thinking that acceptance into a sandbox guarantees compliance – or at least will garner less regulatory attention. ‘Sandbox participation can facilitate but not guarantee full compliance,’ says Daniela Birnbauer, an attorney at Schoenherr in Vienna. In reality, she explains, companies with AI products, for example, ‘must still complete applicable conformity assessments under the AI Act [in the EU] and obtain or vary financial licences where required’.

It’s also incorrect to treat participation as a public endorsement, she adds. For example, Austrian law explicitly prohibits using sandbox participation as a ‘seal of approval’. Nor is there any publicly available quantitative evidence that companies in Austria – or anywhere else – that have used a regulatory sandbox are systematically more successful in terms of revenue, survival or growth than similar companies that didn’t use them, she says. It’s a point other lawyers raise, too.

James Klein, a partner at law firm Spencer West in London, isn’t sure there’s any real or hard evidence that companies that have used sandboxes have been more ‘successful’. He says ‘there may be some evidence of slightly faster market entry in some instances due to uncovering issues at an earlier stage, but success is more likely to come down to the quality of the product and/or service itself’.

The UK FCA’s regulatory sandbox, for example, has supported over 800 companies and helped establish the UK as a Fintech leader. However, participation is no guarantee of funding: the FCA noted in a 2017 review that ‘at least’ 40 per cent of companies that completed testing in the first cohort received investment during or following their tests.

Considerations before embarkation

Before deciding to enter a regulatory sandbox, Su says companies should address three core considerations. First, they should distinguish technical innovation from regulatory ambiguity. ‘If a product is technically advanced but operates within clear legal frameworks – meaning it faces no actual regulatory conflict – a regulatory sandbox is likely unnecessary,’ she says. Second, companies must evaluate whether they can define a test scenario that is reasonably limited – for example, by restricting the user base, region, transaction caps or duration – while remaining representative. Crucially, counsel ‘must consider whether this limited version will still yield meaningful data for the actual business model,’ says Su. Finally, companies should remember that ‘joining a regulatory sandbox is not a waiver of liability’.

Su believes there are several issues that in-house counsel should be aware of if their organisations are thinking about testing their products/services in a regulatory sandbox. For example, in-house lawyers should strictly manage marketing communications to avoid using misleading terms such as ‘government approved,’ ‘certified’ or ‘guaranteed compliance’, and ensure regulatory requirements are embedded within product design and contract terms to achieve compliance.

Ebner says in-house counsel should first confirm strategic fit and readiness if their organisations are considering applying for a regulatory sandbox. In the case of AI products, in-house lawyers should check that the company believes that the use case is ‘genuinely innovative, testable with real users under safeguards, and capable of benefiting from supervisory engagement’. A complete regulatory map is also essential, he says. In the EU, this would cover AI Act classification – which sets out the level of risk associated with the AI models being used – sectoral licensing and conduct rules, data protection laws, anti-money laundering and countering the financing of terrorism rules, and other applicable regulations.

Ebner also advises in-house counsel to treat the sandbox as an ‘evidence generator’ by planning tests to produce results that address AI Act obligations – such as risk management, data governance, technical documentation, transparency, human oversight, accuracy/robustness – and sectoral requirements, with a clear exit path. He adds that ‘where the innovation can proceed within existing frameworks without sandbox tools, a direct authorisation strategy may be faster and less burdensome’.

He also warns that in-house counsel should be aware of the risk of critical liability, highlighting that participating companies remain liable for damages caused during testing. ‘Following the sandbox plan and guidance in good faith protects participants from administrative fines during the test, but this is not immunity from civil liability,’ says Ebner.

“Following the sandbox plan and guidance in good faith protects participants from administrative fines during the test, but this is not immunity from civil liability


Martin Ebner
Chair, IBA AI Banking and Financial Subcommittee

Herbing believes companies weighing up sandboxes as a route to market for their AI systems need to assess whether participation is ‘relevant’ or whether their existing compliance approach is ‘adequate’. In particular, she says, in-house lawyers should question whether the sandbox’s timeframe and scope allow for proper testing, or if participation might unduly delay commercialisation of the system. They should also consider what happens when the AI system exits the sandbox, and what benefits the testing environment may have brought by that point.

In-house lawyers should further assess whether the company is comfortable with the level of regulatory engagement required – which could involve sharing detailed information about how products/services work – as well as what confidentiality undertakings will be in place as part of the process.

‘Some organisations will conclude that the commitment required, combined with the uncertainty about what regulatory treatment they’ll face post-sandbox, doesn’t justify participation,’ says Herbing, ‘particularly if they’d rather not invite close regulatory examination of potential compliance issues.’

Other commentators agree that such scrutiny and need for disclosure could prove to be inhibitors. ‘Some leadership teams worry that engagement increases regulatory visibility or creates discoverable records that could be scrutinised later, even though the practical benefit of early risk surfacing may outweigh that concern when handled with appropriate legal privilege strategy and clear internal communications,’ says Nathalie Moreno, a data protection, cybersecurity and AI partner at law firm Kennedys in London.

Dario Perfettibile, Vice President and General Manager of European operations at data tech company Kiteworks, says AI companies considering a regulatory sandbox need to think carefully about data governance and security infrastructure. They must be able to demonstrate comprehensive audit logging, data provenance tracking and incident response capabilities throughout the testing phase, since regulators will scrutinise how AI training data is sourced, how model decisions are recorded and how quickly failures are reported, he says.

Cross-jurisdictional regulatory alignment is also an issue that needs to be addressed, says Perfettibile. ‘Sandbox approval in one jurisdiction doesn’t carry over to others, so in-house counsel should map sandbox requirements against broader frameworks’ such as the EU General Data Protection Regulation and the AI Risk Management Framework published by US standard setter the National Institute of Standards and Technology, and ensure their data governance setup supports multi-jurisdictional compliance, including data residency and cross-border transfer rules. Further, he warns that sandbox environments rarely replicate real-world data volumes or integration complexity, therefore ‘counsel should verify that audit logging, access controls and encryption will hold up at full production scale without architectural overhauls’.

The current state of take-up

There are several reasons why sandboxes are not more commonly used, says Su, with a major factor being cost. ‘Companies face significant resource costs in preparing materials, defining business scenarios and maintaining regulatory dialogue,’ says Su. ‘Since the regulatory sandbox does not offer a waiver for compliance – and the post-exit regulatory landscape continues to evolve – many companies carefully weigh the return on investment.’

Another concern is that sandbox initiatives often require aligning pilot programmes with existing statutory requirements and coordinating across different government departments. This is a careful process, which can naturally influence the pace of expansion for the regulatory sandbox model, adds Su.

Specific national requirements can also influence whether companies take part, says Su. In China, for example, in sectors such as AI governance, the current regulatory focus is on mandatory frameworks, such as the ‘filing and security assessment’ required by the Interim Measures for the Management of Generative AI Services, the country’s key AI regulation. Most companies are prioritising resources to meet these statutory obligations first, she says.

“Well-resourced incumbents with strong regulatory teams may not need sandbox tools, preferring direct authorisation routes


Daniela Birnbauer
Attorney, Schoenherr

Meanwhile, in Europe, Birnbauer believes there are two main reasons why regulatory sandboxes are not more commonly used. First, sandboxes are still maturing under the EU AI Act, with many Member States building structures, staffing authorities and finalising operating procedures. Although the legal deadline for national sandboxes to be set up under the AI Act falls later in 2026, ‘national authorities remain cautious about scale as sandboxes require intensive supervisory engagement,’ says Birnbauer.

Secondly, the cost-benefit calculus can dissuade participation. ‘Well-resourced incumbents with strong regulatory teams may not need sandbox tools, preferring direct authorisation routes,’ says Birnbauer, while some companies misjudge the marketing value once they learn that endorsement claims are prohibited.

‘Sandboxes are still underused because many businesses see regulation as something to avoid rather than engage with, or assume these programmes are only for heavily regulated sectors,’ says Ross McNairn, CEO of Wordsmith AI, a legal tech company. ‘But as AI and legal tech grow, that mindset needs to change.’ A well-run AI sandbox, he says, could help companies innovate responsibly, ‘while building trust and accountability from day one’.