Walking a tightrope on AI regulation

William Roberts, IBA US Correspondent
Monday 22 January 2024

US Vice President Kamala Harris (R) looks on as President Joe Biden signs a new executive order guiding his administration's approach to artificial intelligence during an event in the East Room of the White House on 30 October 2023 in Washington, DC. (Photo by Chip Somodevilla/Getty Images)

Legislators worldwide are caught between seeking to regulate the safe use of artificial intelligence and harnessing the opportunities it presents, as Global Insight reports.

In late 2023, representatives of 28 nations met with technology executives, researchers and civil society groups at Bletchley Park in the UK to discuss artificial intelligence (AI).

The symbolism of the location in the English countryside was unmistakable. Bletchley Park was where, during the Second World War, code breakers – including a team led by Alan Turing – cracked the German ‘Enigma’ cipher, and where Colossus, the world’s first programmable electronic digital computer, was built. Now, with governments, corporations and investors pouring billions in capital into AI ventures, a new global economic contest is afoot.

The Bletchley AI Safety Summit was attended by China, the US and more than two dozen other nations, as well as executives from the European Union. The delegates agreed on a new ‘Bletchley Declaration’, setting global principles for the safe development of AI. Participants resolved to identify AI safety risks and build new regulatory policies in their respective countries. Global cooperation will be necessary, they agreed.

Government officials and industry leaders joined a statement calling for independent ‘safety testing’ of next generation AI models. The UK itself has launched a new AI Safety Institute, staffed by experts and advised by industry, to evaluate ‘new frontier’ AI models. Summit participants will meet next in a few months in South Korea and then again in France.

The Declaration recognises both the enormous opportunities and alarming risks that rapidly evolving AI technology presents. But can AI even be governed? Or will the competition for AI jobs – which, according to the World Economic Forum, are growing faster than jobs in any other sector – lead governments to prioritise innovation first?

Guardrails and referees

At Bletchley, the UK’s Prime Minister, Rishi Sunak, made clear the UK government, amid its impulse to regulate, would seek first to promote innovation, investment and jobs. Indeed, much of the conversation over the two-day summit was about ‘making sure that we’ve got a regulatory system that’s pro-innovation’, in Sunak’s words. ‘Of course, we always need guardrails on the things that will worry us, but we’ve got to create a space for people to innovate and do different things.’

An analysis by the International Monetary Fund estimates that 40 per cent of occupations worldwide will be affected by AI – and even more so in developed nations. Investors put $27bn into generative AI start-ups in 2023, according to data from capital market company PitchBook. The consultancy company PwC has estimated that the economic impacts related to AI could reach $16tn globally by 2030.

Since [the EU AI Act] is the first comprehensive act of its kind in the world, we can expect it will influence other jurisdictions

Johan Hübner
Former Chair, IBA Artificial Intelligence and Robotics Subcommittee

Sunak was clear that he wants to promote AI with pro-investment tax policy and a local entrepreneurial culture that encourages risk-taking. London is emerging as a hotbed for AI start-ups and development, alongside San Francisco and Singapore. The UK will invest £100m to accelerate the use of AI in life sciences and healthcare. The hope and expectation is that the use of AI can help find treatments for dementia, cancer and other incurable diseases, Sunak said.

Meanwhile, the UK created the AI Standards Hub in 2022 to bring the government, tech companies, regulators, civil society and academia together to share knowledge and to support an agile, pro-innovation approach to AI governance. The UK government also announced at Bletchley that it has commissioned Canadian computer scientist Yoshua Bengio, a leading AI researcher, to produce a report on the ‘state of science’ to help inform policymaking.

All of this hardly sounds like ‘regulation’ but more like a sophisticated form of industrial policy, designed both to create new jobs and to navigate an uncertain future in which computers will do more and more of the work humans do now. Politicians face the dual challenge of managing the trade-offs AI brings between job creation and job destruction.

Tesla and SpaceX Chief Executive Officer Elon Musk gave his endorsement at Bletchley to the UK government’s approach. Predicting that future AI applications will be the ‘most disruptive force in history’, Musk sees a need for government to play a role. In his view, some in Silicon Valley, who perhaps don’t have much experience working with regulators, are concerned that this will mean that innovation is ‘crushed’, and everything is slowed down.

‘It will be annoying, that’s true. They’re not wrong about that’, he said. ‘But we’ve learned over the years that having a referee is a good thing. If you look at any sports game there’s always a referee, and nobody’s suggesting to have a sports game without one. And I think that’s the right way to think about this, is for government to be a referee to make sure there is sportsmanlike conduct and that the public safety is addressed.’

Legislating, fast and slow

As 2023 came to an end, EU lawmakers reached an agreement on a comprehensive framework to regulate AI. The bill takes a risk-based approach and includes prohibitions on biometric categorisation based on sensitive characteristics such as race and gender, and on certain facial and emotion recognition systems, with narrow exceptions for law enforcement. The legislation, once passed, won’t apply until two years after it comes into force.

‘There has been agreement on generative AI, which is separate from the general risk-based approach, and also opening up for regulatory sandboxes not to stifle development’, says Johan Hübner, the former Chair of the IBA Artificial Intelligence and Robotics Subcommittee and a partner at Delphi law firm in Stockholm. ‘Since it is the first comprehensive act of its kind in the world, we can expect it will influence other jurisdictions.’

There’s a lot of policy work, a lot of regulatory work yet to be done. Thus far, government is behind. Government is making excuses

Brad Thompson
Member, Epstein Becker Green

‘From a US-EU perspective I doubt it will change that much’, says Hübner. However, while ‘the US and China are still way ahead on development’, corporates in those jurisdictions will need to comply with the EU legislation if they want to be active in the bloc. Hübner doubts ‘that the global behemoths would want to limit themselves’, so, in his view, the legislation will have some effect. ‘The suffering parties will be EU AI developers who will face these rules whether they want to or not’, he adds.

The US appears to be, for now, taking a slower approach to AI, recognising the need for government oversight whilst teaming up with the private sector to promote innovation. Work on regulating AI is proceeding on two tracks in Washington. Firstly, the White House issued an executive order signed by President Joe Biden in October, directing US government agencies to set standards and adopt rules of conduct within their ambits. Secondly, the US Senate appears to be moving forward with legislation after taking more than a year to study AI and its implications.

Under the direction of Senate Majority Leader Chuck Schumer, the Senate over the past year has held a series of nine ‘insight forums’ involving experts in technology, labour, business, academia and civil rights, aimed at informing lawmakers. These meetings were more informal in nature, taking place outside of the traditional committee structure of Congress, which is now expected to take over in writing new laws. ‘One of the Senate’s top priorities will continue to be legislating on artificial intelligence’, Schumer said in remarks to the Senate in mid-January.

‘We have discussed everything from AI’s impacts on democracy, on our workforce, on national security, and thorny but technical issues like transparency, explainability, bias, and more’, Schumer said. ‘AI for sure will be one of the most difficult issues that the Senate has ever faced, but if there has been any consensus so far in these forums, it’s that Congress must intervene to promote safe AI innovation.’

Given the appearance of bipartisan consensus in the Senate that action is needed, it’s probable that, at the least, a substantive bill will be put forward and debated. Whether new law can be enacted before the November 2024 national elections is uncertain and perhaps unlikely.

In the meantime, Biden’s executive order serves as the controlling policy framework for AI regulation in the US. A sweeping order, the scope of the White House’s recommendations and required actions will apply to existing government regulatory functions already at work throughout the US economy. ‘The executive order is very long, very detailed. But an executive order is really just an instruction to government departments as to what they ought to be doing’, says Brad Thompson, a member of the firm at Epstein Becker Green in Washington, DC.

The executive order focuses on transparency and labelling of AI content and builds on voluntary commitments from leading tech companies to produce ‘safe, secure, and trustworthy’ AI, according to a White House fact sheet. It incorporates the AI ‘Bill of Rights’ that the White House issued in 2022, which outlines a series of values that should guide AI policymaking, such as safety, data privacy, anti-discrimination, public notice and the ability to opt out of AI-driven systems.

‘If you take a historical view of what will be remembered in five years, I’m not sure we’ll be remembering very much about this executive order or, for that matter, the Bletchley Declaration’, Thompson says. ‘There’s a lot of policy work, a lot of regulatory work yet to be done.’ Thus far, he adds, ‘government is behind. Government is making excuses.’

William Roberts is a US-based freelance journalist and can be contacted at wroberts3@me.com