Regulatory divergence presents obstacles for legal teams navigating AI

Neil Hodge
Wednesday 23 August 2023

The lack of a harmonised approach to regulating artificial intelligence will create difficulties for companies and their in-house legal teams, as In-House Perspective reports.

Artificial intelligence (AI)-based technologies appear to be developing faster than the regulations meant to police them. Governments, meanwhile, are often unsure how prescriptively they should legislate for technological advances, given concerns that any rules put in place could soon be made redundant. The expectation that AI-led innovation will dramatically stimulate economic growth also makes it politically difficult to develop effective regulation and oversight.

Europe is trying to take the lead. On 14 June the EU moved a step closer to passing its landmark AI Act, when the European Parliament adopted its negotiating position on the legislation. It’s expected that the new rules will be finalised at the end of 2023 and that they’ll come into effect in 2027, to be applied consistently by data regulators across the 27-nation bloc. Aimed at protecting consumers from any unintended consequences that AI technologies and machine learning (ML) might create, the AI Act follows a risk-based approach, regulating such systems according to the level of risk they present.

The AI Act will have extraterritorial effect and highly punitive fines of up to six per cent of global turnover or €30m, whichever is higher, so any business wishing to access the EU market will need to comply with the legislation, irrespective of where it is based. While the AI Act holds tech developers to account for the products and services they market, organisations deploying AI systems should be aware that they are also potentially liable for failings linked to the technology’s design if the way they use the AI causes harm. In short, organisations using the technology and those developing AI systems in-house are equally liable under the legislation.

Data regulators worldwide are trying to ensure that AI tech developers build privacy by design and by default into their products, while also instilling in companies the need for any AI use to be safe, ethical and legally compliant. In June, for example, the UK’s Information Commissioner’s Office (ICO) recommended that organisations make wider use of privacy-enhancing technologies (PETs) to share people’s personal information safely, securely and anonymously. In July the US government secured voluntary agreements from tech giants including Google, Microsoft, Amazon and Meta, which vowed to ensure the safety of the AI products and services they provide and to improve transparency about how AI is used in those services.

However, as yet, there’s no agreed harmonised, global approach. As a result, says Adam Rose, Chair of the IBA’s Data Protection Governance and Privacy Subcommittee and a partner at Mishcon de Reya in London, the variety of regulatory approaches across different jurisdictions will create difficulties for companies and their in-house legal teams.

For example, in post-Brexit UK, it’s likely that digital regulators will continue to take a ‘light touch’ approach, in contrast to the direction of travel in the EU, says Rose, where the AI Act takes an overarching legislative approach. In the US, however, there’s currently no federal approach to AI regulation, leaving a patchwork of existing laws that are being made to fit. Meanwhile, he says, both China and Brazil are introducing what look like comprehensive and prescriptive legislative programmes, while India hasn’t yet introduced any specific legislation. Some countries, such as South Korea – expected to have passed AI-specific legislation by the end of 2023 – currently have risk-based AI frameworks in place.

Inevitably, says Rose, companies must recognise that because different jurisdictions may have very different laws in place, they’ll need to vary their approach to how they use AI from one country to another. ‘Understanding what steps individual regulators are taking, and keeping abreast of the various steps and guidance that regulators are announcing, is a huge task, especially for companies with a global footprint’, he adds.

In terms of the action companies – and in-house lawyers – should take to ensure the organisation is using AI in accordance with data privacy rules and best practice, Rose says it’s important to stress that the legal risks of using AI, especially generative AI, are, ‘if not completely novel, still relatively untested and underexplored’. At the same time, the technology and its use cases are developing at a rapid pace. ‘This inevitably presents a complex risk landscape, and companies with their lawyers will need to do their modelling accordingly’, he explains.

Certainly, within a European context – which includes the UK – companies should introduce a comprehensive system of data impact assessments that cover data protection but also anti-discrimination and human rights more generally. Failure to do so is likely to put companies at legal and regulatory risk and, ‘for that reason, it would be sensible for a similar approach to be adopted even in jurisdictions where such impact assessments are not standard practice’, says Rose.

For Doil Son, Co-Chair of the IBA Technology Law Committee and a partner at Yulchon in Seoul, the main concerns for data regulators more generally centre on how AI uses personal data and whether it infringes intellectual property rights when it collects data to ‘learn’ from. ‘Using data for AI learning becomes a difficult issue, especially when AI technology simply scrapes as much publicly-available data as possible from websites without regard to whether it is accurate, necessary or harmful, especially if that data is replicated or republished’, he explains.

To protect themselves from potential regulatory action, Son says companies should make it clear to customers that they’re using AI and that the technology isn’t infallible. ‘Companies need to be transparent about AI use, put disclaimers on the services they are providing using AI, and warn customers that the technology can make mistakes. This will help build trust,’ he says. Companies should also ensure that the data they use has been obtained lawfully and that their use of it complies with intellectual property and privacy laws, adds Son. They should ‘maintain that the algorithm they are using works properly and ethically and be able to show the history of how it has been developed to ensure transparency’, he says.

“Companies need to be transparent about AI use […] This will help build trust”

Doil Son, Co-Chair, IBA Technology Law Committee