In mid-February, a video of Manoj Tiwari, President of India’s ruling Bharatiya Janata Party, was released on WhatsApp. In it, Tiwari speaks convincingly in Haryanvi, a Hindi dialect spoken by voters the BJP is targeting. The video is a deepfake: in the original recording, Tiwari speaks in English. The footage marks the first time a political party has used deepfake technology for campaigning purposes.
A ‘deepfake’ is new video and audio of a person generated by an AI algorithm learning from real footage. While deepfakes have been more readily associated with revenge porn, their hugely disruptive potential in the era of state-backed fake news is alarming governments and lawmakers. In June last year, Adam Schiff, Chairman of the US House of Representatives Permanent Select Committee on Intelligence, warned of ‘nightmarish scenarios that would leave the government, the media, and the public struggling to discern what is real’.
Katherine Forrest, a partner at US law firm Cravath, Swaine &amp; Moore, is not convinced the law in its current form will be able to cope with the implications deepfakes present. ‘Deepfakes are going to radically change the way in which videotaped evidence is viewed in courtrooms and other legal proceedings,’ she says. Identifying them, moreover, is hugely expensive and time-consuming. ‘We don’t yet have the ability to quickly deploy all of the resources that would be required to bring a legal challenge and have it immediately resolved.’
Even the legal basis of such a challenge might be up for debate. From defamation law to privacy law, deepfakes do not fit neatly into any one existing legal category. ‘They’re entirely novel creations,’ says Forrest, therefore ‘you might also be able to get around certain kinds of copyright claim because of the transformative nature of the work.’
For John Buyers, an IT specialist at Osborne Clarke, there’s also a ‘fundamental problem of legal causation’ when it comes to artificial intelligence. ‘These machines are coming up with outputs and decisions autonomously,’ he says. ‘So, it becomes really hard to trace the bad actors.’
Certainly, well-resourced, state-backed criminals are unlikely to be found, says Joe Hancock, partner and Head of Cyber at Mishcon de Reya. ‘The people most likely to be caught are the stupid or the innocent,’ he remarks. ‘The student who makes a deepfake of his friend, which gets passed around social media, gets a huge penalty.’
Identifying the most serious culprits is a difficult task, exacerbated by the rapid evolution of deepfake technology as part of a digital arms race between criminals and law enforcement agencies. As a result, many high-ranking government officials, like Congressman Schiff, have turned the spotlight of responsibility for deepfakes on the hosts: social media platforms.
Facebook has been at the heart of an ongoing debate about online free speech in relation to fake news, particularly following the allegations of Russian interference in the last US presidential election. The tech giant faced fierce criticism from some quarters for refusing to remove content. Then, in January this year, the social network announced a new policy banning deepfakes.
The following month, European leaders began talks on new rules and regulations concerning artificial intelligence and data privacy. Facebook Founder and Chief Executive Mark Zuckerberg meanwhile requested greater regulation from governments. In an op-ed in the Financial Times, he described ‘more oversight and accountability’ as key to public faith in online platforms such as Facebook.
Traditionally, the US government has been far more reluctant to regulate the tech industry than its European counterparts. Wearing his ‘American, First Amendment media lawyer hat,’ Robert Balin, past Chair of the IBA Media Law Committee and partner at Davis Wright Tremaine, recognises the danger of deepfakes but worries ‘that the cure could be worse than the disease’.
‘There is no question that there is a positive and essential role for Congress – or any legislative body – in outlawing deepfakes by using the criminal law avenue to pursue malicious actors,’ he says. ‘The harder question, though, which requires care and thought, is: what is the role of social media platforms? Will they be compelled to make changes?’
At best, greater regulation is a blunt instrument; at worst, a quick way to stifle innovation. Either way, it appears inevitable. For Buyers, the more strategic, concerted approach adopted by the European Union is better than the ‘patchwork’ of legislation he fears the UK is heading towards. ‘I can see it applying to specific harms, but nothing that’s going to have a consistent reach over the technology,’ he says.
For Hancock, approaching deepfakes as a ‘side effect’ of a bigger issue is key. ‘We need to tackle fake news, we need to tackle state interference in the political process by digital means,’ he stresses. Legislation or regulation needs to be ‘broad enough to capture this stuff but also specific enough that it’s actually useful.’ Only then can we ‘tackle deepfakes and also start to tackle technology we probably haven’t even thought of yet.’