AI: US and EU policymakers move to regulate after industry rings alarm bells
The artificial intelligence (AI) chatbot ChatGPT, launched in November, has sparked the world’s collective imagination about what AI can do. Previously the domain of computer scientists and science fiction movies, AI is now the subject of significant attention in the US and beyond.
US policymakers are rushing to put guardrails around AI before it gets away from them. A robust debate is underway in Washington about how to both promote innovation in AI and safeguard the public amid a wider competition with China and Europe. ‘The primary domains for AI having an impact, at least in the near term, are areas like employment, healthcare, finance and consumer marketing’, says Bradley Merrill Thompson, a member at law firm Epstein Becker Green in Washington, DC.
Sam Altman, co-founder and Chief Executive Officer of OpenAI, which produced ChatGPT, has told Congress that government regulation of AI is needed. ‘My worst fears are that we – the field, the technology, the industry – cause significant harm to the world. That could happen in a lot of different ways’, Altman told the Senate Judiciary Committee in May. ‘If this technology goes wrong, it can go quite wrong. And we want to be vocal about that.’ He added that Congress should require licensing for major AI systems and establish a government agency to oversee AI’s regulation and risks.
US President Joe Biden met with AI industry leaders in San Francisco in late June to discuss the technology’s potential and the threats it poses. The White House Office of Management and Budget is expected to issue a sweeping new order in the coming weeks providing guidance for US regulatory agencies.
Meanwhile, lawmakers in Congress are preparing to draft legislation. Senate Majority Leader Chuck Schumer has announced plans to host a series of forums on AI with industry leaders and interest groups in the coming months, anticipating ‘ambitious’ legislative action.
But the emerging US agenda on AI opens new questions as to whether Washington and Brussels can bridge divides in digital regulation. ‘The earliest you’d see something [in the US] is next year, although you then get into presidential election politics’, says Samir Jain, Vice President of Policy at the Center for Democracy & Technology, a public interest advocacy group based in Washington, DC. ‘Now, there could be bits and pieces around AI that get inserted into other legislation. But I don’t think you’ll see any comprehensive framework this year from Congress.’
In mid-June, the European Parliament approved draft rules setting new global standards for AI. The EU’s AI Act applies a risk-based framework to AI companies, imposing increasing levels of compliance and self-governance requirements as the stakes involved rise. AI applications with minimal risk, such as video games, wouldn’t be restricted. Moderate-risk services, such as chatbots, would have transparency obligations. Higher-risk AI technology used to screen for employment, and AI used by major platforms to recommend content, would have to conform to standards. AI techniques used for social scoring, public surveillance or subliminal manipulation of human behaviour would be banned.
‘We’re in a bit of a waiting game, because the industry really wants regulation’, says Johan Hübner, Chair of the IBA Artificial Intelligence and Robotics Subcommittee and a partner at law firm Delphi in Stockholm. ‘I will be very surprised if there will be federal [US] legislation. But given that the EU has taken such a proactive stance in this budding technology field’, it’s likely the EU law will become the standard that major tech companies adopt for their global operations, Hübner says, adding that US companies wishing to do business in Europe would need to adhere to the EU’s rules.
However, Thompson doesn’t ‘see a role, contrary to the European model which everyone sort of wants to follow, of a generalised regulatory paradigm for AI for the US. I just don’t think our legal system is built the same way. We have federal agencies that are domain experts at monitoring and tracking and regulating activities’ in key areas, Thompson explains.
In May, Ireland’s Data Protection Commission (DPC) imposed a €1.2bn fine on Facebook’s owner Meta for violating EU data privacy rules, following a binding decision by the European Data Protection Board. Meta’s lawyers intend to appeal the ruling, which the company called ‘unjustified’ and ‘dangerous’. At issue is Meta’s transfer of Europeans’ data to servers in the US, where it potentially can be accessed by US intelligence agencies under controversial post-9/11 mass surveillance programmes. The DPC has ordered Meta to stop transferring Europeans’ data and to return to Europe or destroy data that was transferred, which can include photos, friends, messages and advertising preferences. Whether Meta will actually have to comply with the DPC order is unclear.
The US and EU are negotiating a new data privacy framework that would allow Meta to continue to transfer the personal data of users across borders as it does now. ‘The DPC order puts the pressure on that political situation to get that agreement authorised’, says Lokke Moerel, senior of counsel at law firm Morrison Foerster in Brussels. She explains that the European Commission has issued a draft adequacy decision authorising the new framework, with a final decision expected this summer. ‘Now, if that decision is adopted, that would mean that Meta is off the hook as [far as] the transfers are concerned’, Moerel says. The record fine for past infringements remains in place unless Meta wins on appeal.
European concerns about personal data ending up in the US where spy agencies can access it parallel US fears that Americans’ data held by ByteDance’s TikTok app is available to Communist Party authorities in China. ‘Everybody thinks it’s ridiculous that we worry that EU data ends up in the US but in the end, it’s a similar issue’, Moerel says.
Meanwhile, the emergence of AI systems adds a whole new layer of complexity to data privacy issues. As AI advances, it’s being used to predict consumer behaviour based on personal data. And at the recent Senate hearing, senators worried about the impact of AI fakes on public opinion heading into the 2024 US presidential election and whether creative artists and musicians can assert intellectual property rights over AI products that mimic their work. ‘We all have to be humble about our forecasts and forecasting is such a hard thing’, says Thompson. ‘But the fact of the matter is, there’s an awful lot of applications for AI at this juncture.’