Are you an ostrich?: the wave of novel AI uses

Thursday 4 April 2024

Joanna Curtis
Brown Rudnick, London
jcurtis@brownrudnick.com

Jane Colston
Brown Rudnick, London
jcolston@brownrudnick.com

     

What is ‘generative AI’?

‘Generative AI’ refers to computer programs that automatically generate text, images or other media. It first caught the headlines in November 2022 when OpenAI launched its ChatGPT service, one of the first large language model (‘LLM’)-powered generative AI products to become accessible to consumers and businesses for free. Other firms soon followed suit with their own products. GPT-4 was launched in 2023, and there are now myriad LLM-based and image-based generative AI products on the market for users to choose from according to their use case and budget. The pace of technological advancement is incredibly fast, and this presents a practical challenge in keeping up to speed, which in turn can trigger a fear of the unknown. But what we mustn’t do is stick our heads in the sand.

To understand the fraud risks that generative AI presents, it’s necessary to have a basic understanding of how it works. Take LLMs: (1) they are computer programs trained on a large dataset of human language (often drawn from internet sources); (2) they compute the probability of words appearing in proximity to each other; and (3) given a word-based prompt, they generate a result, in human language, based on that probability. Image-based models use an essentially similar process, trained on a dataset involving images, and producing AI-generated images in response to a word-based prompt. Other generative AI models can also produce sound and video output.
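To make steps (2) and (3) concrete, the sketch below is a deliberately toy ‘language model’: it simply counts how often one word follows another in a tiny invented corpus and then generates text by sampling the most probable next word. Real LLMs use neural networks with billions of parameters trained on vastly larger datasets, but the underlying probabilistic idea is the same.

```python
# Toy illustration only: count word-to-word probabilities, then generate text
# from a prompt by repeatedly sampling a likely next word.
import random
from collections import defaultdict, Counter

corpus = "the payment was approved the payment was blocked the transfer was approved".split()

# learn how often each word follows another (a crude stand-in for training)
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def generate(prompt: str, length: int = 5) -> str:
    """Given a word prompt, pick each next word in proportion to its observed probability."""
    word, output = prompt, [prompt]
    for _ in range(length):
        counts = next_word_counts.get(word)
        if not counts:
            break
        words, weights = zip(*counts.items())
        word = random.choices(words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # eg, "the payment was approved the transfer"
```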

The power of generative AI lies in its ability to successfully mimic human-generated media, and to do so at a speed and volume that can outperform a human. It is often accessible on an open-source basis, which allows the technology to be used and manipulated without any oversight by the companies that produced it (although any data gathered from users may then be used to train the AI).

So how could this increase fraud risk for businesses? Let’s look at two key risks: deepfakes and phishing.

Deepfakes

Deepfakes are video or sound files that use existing recordings of an individual to create a fake, or manipulated, ‘recording’ that mimics them doing or saying something they didn’t. In February 2024, a finance worker in Hong Kong paid out $25 million after fraudsters used deepfake recreations on a video call, posing as his company’s London-based CFO and other staff members.[1]

With more and more (legitimate) images of powerful individuals now being posted online, there is ever more material available that a generative AI model could ‘learn’ from and manipulate. For example, webinars and media appearances are now routinely recorded and published on company websites and social media.

One risk from deepfakes is therefore the enhanced threat of confidence fraud: attempts to extract payment by means of deception. As with the Hong Kong worker, deepfakes can exploit the very means by which we are accustomed to verifying the authenticity of a request. A faked email request, followed by a deep-faked phone call or voicemail, is potentially very convincing.

A second risk is reputational. Deepfakes circulated online can cause significant damage to brand and reputation. Fraudsters are already using deepfake celebrity videos on social media to endorse scam products. The same method could be used to create large numbers of fake social media profiles to post negative reviews or false accusations.

There are also real concerns over the use of deepfakes to influence elections across the globe, especially given the number of countries going to the ballot box this year.

Phishing

Phishing is the use of scam emails, texts or phone calls to trick victims into making a payment or downloading ransomware. It is not a new form of fraud, but with generative AI it becomes possible, at scale, to: (a) harvest information from someone’s online presence (social media, firm website); (b) use it to send them a personalised message, sounding like their best friend, their mum or their boss, and include specific contextual information to make it sound true; and (c) engage that person in an email exchange to persuade them to transfer money. If everyone in your firm gets a personalised message like this, the odds increase that someone will fall for it. The UK Supreme Court held last year that a customer of a bank cannot (generally) look to their bank to compensate them for loss suffered from a phishing fraud, given the bank’s main duty is to honour the customer’s instruction to pay.[2] This will often leave the paying victim without a remedy against a solvent and known defendant.

How can we protect against these risks?

Unfortunately, detecting whether text or images have been AI-generated is not reliable. In July 2023, OpenAI pulled its own AI-detection product due to its unreliability.[3] This is because AI-detection products are largely based on a similar model to the LLMs themselves – that is, they assess text or images based on the probability of words (or types of pixels) appearing in proximity to one another, and use that to estimate the likelihood that the content is AI-generated. The better the AI gets at mimicking human-generated media, the harder it is to detect.
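The sketch below illustrates the style of detection described above, and nothing more: it scores how ‘predictable’ a piece of text is under a small, invented table of next-word probabilities, on the theory that highly predictable text is more likely to be machine-generated. Real detectors use far larger models and, as noted, remain unreliable.

```python
# Illustrative scoring only: higher average predictability = more 'machine-like'.
import math

# hypothetical next-word probabilities, invented for this example
next_word_prob = {
    ("the", "payment"): 0.6, ("payment", "was"): 0.9, ("was", "approved"): 0.5,
}

def predictability(text: str) -> float:
    words = text.lower().split()
    probs = [next_word_prob.get(pair, 0.01) for pair in zip(words, words[1:])]
    # geometric mean of the word-pair probabilities
    return math.exp(sum(math.log(p) for p in probs) / len(probs))

print(f"average word predictability: {predictability('the payment was approved'):.2f}")
```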

Generative AI can be used to deceive machines, too. Technology is now used in a variety of ways to confirm client IDs (eg, to facilitate bank transfers), including voice recognition, face recognition and fingerprint ‘touch-ID’. These are now all potentially vulnerable to being faked or mimicked by AI, and at scale. The target for deception here is the very software designed to detect real human beings. A fraudster could potentially obtain stolen biometric data from a data leak or hack and use a generative AI model to impersonate a real human being, either to initiate a payment or to create a new account in which to hide criminal assets.

So, what can businesses do?

AI-powered monitoring

AI models are excellent at spotting patterns and can be used to monitor the use of a business’s electronic network or bank account and identify unusual patterns that could indicate fraudulent activity.
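As a minimal sketch of the idea, the example below runs an off-the-shelf anomaly-detection model (scikit-learn’s IsolationForest) over a handful of hypothetical payment records and flags the outlier for review. The amounts, hours and threshold are invented for illustration; a real deployment would use the business’s own transaction features and tuning.

```python
# Flag unusual payments for human review using a simple anomaly-detection model.
from sklearn.ensemble import IsolationForest

# each row: [payment amount, hour of day the payment was initiated] (made-up data)
payments = [
    [120.0, 10], [95.0, 11], [140.0, 9], [110.0, 14], [130.0, 15],
    [105.0, 10], [98.0, 13], [25000.0, 3],   # the last payment looks unusual
]

model = IsolationForest(contamination=0.1, random_state=0)
labels = model.fit_predict(payments)          # -1 = flagged as anomalous

for row, label in zip(payments, labels):
    if label == -1:
        print(f"review payment: amount={row[0]}, hour={row[1]}")
```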

Educate staff

Businesses should already be educating staff about fraud detection, including how to recognise a phishing attempt and verify documents (eg, looking closely at email addresses, hovering the mouse over suspicious links and phoning co-workers to confirm any payment request that appears to come from them). Staff should be taught about how new AI technologies might increase the risk of these types of fraud.

Controls

Businesses should ensure their processes and controls (especially for making payments) are clear, robust and adhered to.

Multi-factor authentication

Even if some types of ID verification can be impersonated, multi-factor authentication reduces the likelihood that a fraudster will have all of the types of ID needed to access a user’s account. Businesses need to think: ‘something you have’ (such as a smart card or smartphone), ‘something you know’ (such as a password) and ‘something you are’ (such as a fingerprint or other biometric).
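By way of illustration, the sketch below implements one common ‘something you have’ factor: a time-based one-time password (TOTP, RFC 6238) of the kind generated by authenticator apps. The secret shown is a made-up example; in practice, a unique secret is provisioned per user and held server-side for verification, so a stolen password alone is not enough.

```python
# Minimal TOTP generator: an HMAC of the current 30-second time window,
# truncated to a 6-digit code that changes continuously.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval            # changes every 30 seconds
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# example (not real) shared secret; the fraudster would need this device-held
# code on top of any password they have phished
print(totp("JBSWY3DPEHPK3PXP"))
```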

Watermarking of AI-generated content?

We are currently a long way from any enforceable requirement that all AI-generated content be watermarked. The fact that generative AI models have been made available open-source, and the way that the models respond to users’ prompts, means that their creators would struggle now to impose any watermarking that users could not override. Imposing ‘guardrails’ to prevent offensive or dangerous content faces similar challenges.

Watermarking to protect proprietary documents

However, businesses can still use watermarking to protect their own proprietary media. For example, a business distributing confidential material could apply a hidden identifier to each copy, which is unique to each recipient so that any future leaking or manipulation can be detected and the source identified.
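One possible shape of such a scheme, assuming a plain-text document, is sketched below: a per-recipient tag (an HMAC of the recipient’s address under a key held by the business) is embedded as invisible zero-width characters, and a leaked copy can later be traced back to its recipient. The names and key are hypothetical, and real systems would use more robust, tamper-resistant watermarks.

```python
# Embed an invisible, recipient-specific identifier in each copy of a document.
import hmac, hashlib

FIRM_KEY = b"example-signing-key"                    # hypothetical secret held by the business
ZERO_WIDTH = {"0": "\u200b", "1": "\u200c"}          # invisible characters encoding bits

def tag_for(recipient: str) -> str:
    digest = hmac.new(FIRM_KEY, recipient.encode(), hashlib.sha256).digest()[:4]
    bits = "".join(f"{byte:08b}" for byte in digest)
    return "".join(ZERO_WIDTH[b] for b in bits)

def watermark(document: str, recipient: str) -> str:
    return document + tag_for(recipient)

def identify_source(leaked: str, recipients: list[str]) -> str | None:
    return next((r for r in recipients if leaked.endswith(tag_for(r))), None)

recipients = ["alice@example.com", "bob@example.com"]
copy_for_bob = watermark("CONFIDENTIAL: draft settlement terms", "bob@example.com")
print(identify_source(copy_for_bob, recipients))     # bob@example.com
```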

Data security

Businesses and individuals are more vulnerable to fraudulent attacks if their confidential or personal data is stolen or leaked. Be vigilant about data security.

Reputation management

Businesses should have a plan in place for media communications should they become a target for deepfakes and adverse publicity.