Artificial intelligence: lawyers eye pitfalls as take-up accelerates

Lucy Trevelyan, Thursday 2 December 2021

The uses for artificial intelligence (AI) across organisations have proliferated, particularly as the amount of data generated by companies has risen sharply. The use of AI carries risks, however, including in respect of transparency. In-House Perspective examines the potential pitfalls and looks at the EU’s attempts to improve AI transparency through a proposed regulation.

The role and workload of the in-house lawyer have changed and increased exponentially over the last few years. Significant contributing factors behind this development include the huge increase in data that companies now routinely create and store, and the fact that general counsel are now often viewed as part of the executive business team, rather than just legal advisers.

According to the 2020 ACC Chief Legal Officers Survey, 73 per cent of respondents said their executive leadership teams seek their input on business decisions, and 77 per cent attend board meetings. Couple this added responsibility with both the sheer quantity of data that companies now have to deal with – and the more stringent privacy requirements that this attracts under, for example, the EU’s General Data Protection Regulation (GDPR) – and it’s little wonder that in-house lawyers are turning to technology, such as artificial intelligence (AI), to help manage their workload.

AI is a growing phenomenon across a number of areas within businesses, and for in-house lawyers that choose to embrace AI, it can transform the way they work. AI can take over functions such as reading and understanding documents and sending pertinent information to the lawyer, reviewing and creating contracts, raising red flags, carrying out due diligence, handling biometric security, manning online chatbots and filtering CVs.

From early adoption to increasing take-up

And embrace it many are willing to do. A 2020 study by UK law firm Irwin Mitchell found that 80 per cent of in-house lawyers believe technology such as AI will be hugely influential in their organisations in the next five years. The same study, however, highlighted that getting to grips with technology is one of the key areas in which in-house legal teams will need to improve in order to meet the challenges of the next few years.

‘We are certainly starting to see more take-up of AI by in-house teams,’ says Will Richmond-Coggan, a director at Freeths in Oxford, UK. He highlights that the practical applications of the technology to support businesses with increased efficiency have started to come of age, and that the price is becoming much more affordable, even for small and medium-sized businesses.

Richmond-Coggan adds that as AI becomes more ubiquitous, organisations also have a growing sense that they need to keep up with competitors. The result is that the sector is reaching a tipping point whereby it’s moving from isolated clusters of early adoption to widespread take-up.

‘AI is a growing area, but there’s certainly more acceptance – or realisation – of the potential benefits and a recognition that external advisers may build in use of these systems to drive value in the provision of traditional legal services,’ says Dhana Doobay, a partner at Spencer West in the UK.

‘We will see more take-up in the short- to medium-term as in-house teams become more familiar with the benefits and the technology offers more versatility than [it] currently [does] and becomes more cost-effective for medium-sized projects,’ she says.

“We will see more take-up in the short- to medium-term as in-house teams become more familiar with the benefits and the technology offers more versatility than [it] currently [does]”

Dhana Doobay, Partner, Spencer West

Doobay gives as an example the significant time commitment still required to ‘teach’ an AI system the particulars of a review that lawyers are undertaking. In such cases, the quality of the review’s output is limited by the quality of the input, a task that can sometimes fall to junior members of the team who may not fully appreciate the parameters of what they are inputting, she explains.

The Covid-19 pandemic has also accelerated the adoption of technology in many organisations, including the use of AI technology.

Promising developments for the law

Johan Hübner is Vice Chair of the IBA Artificial Intelligence and Robotics Subcommittee and a partner at Advokatfirman Delphi in Stockholm. For him, the continuing improvement in AI technology has ‘promising consequences for applications in law. Examples are the improvements of natural language processing technologies and trends such as an increased use of “transfer learning”, ie, applying trained AI models to new use cases to solve different, but similar tasks’.

‘Furthermore, new services and new service providers are constantly appearing on the market,’ he adds.

AI instruments allow a standard in-house team to undertake certain tasks that would previously have been unthinkable, says Nazar Chernyavsky, Vice Chair of the IBA Technology Law Committee and a partner at Sayenko Kharenko in Ukraine.

‘Now, with regulation getting more stringent, it is no longer sufficient to make a due diligence analysis or compliance review based on a selection of documents – everything must be scrutinised,’ he says. ‘The current stage of AI development already allows doing it at an acceptable cost, which would be lower than hiring an army of lawyers. At the same time, a number of reports demonstrate there is a much lower rate of mistakes attributable to AI compared with the same rate for a less qualified/experienced workforce.’ 

The areas of application for AI are diverse, says Gerlind Wisskirchen, Secretary of the IBA Global Employment Institute and a partner at CMS Germany, and the in-house legal department should be involved in its use across the organisation, not just in areas pertaining directly to its own department.

Pitfalls ahead

However, the use of AI by in-house teams presents a number of legal issues. Wisskirchen notes that even the procurement of AI software raises questions that must be resolved by the legal department. ‘In particular, questions relating to intellectual property rights arise here, such as who owns the AI? Who owns the AI work products? There are also liability issues, since AI is not without its faults. For example, an employee could be injured by a robot. This may result in contractual or legal liability claims. When purchasing AI, it will be important to clarify any liability issues with the AI manufacturer first when drafting the contract.’

“When purchasing AI, it will be important to clarify any liability issues with the AI manufacturer first when drafting the contract”

Gerlind Wisskirchen, Secretary, IBA Global Employment Institute

In-house lawyers might also come into contact with AI in an organisation’s HR department. Here, issues for them to deal with might include those relating to data privacy, employment law and discrimination. In this space, AI might assist in creating job adverts, or perhaps with the use of chatbots in the application process, or even provide more rigorous personality analysis via voice and video recordings.

AI may also be used to handle compliance issues. For example, data analysis by AI could calculate the probability that violations have already occurred – fraud monitoring – or that they will occur in the future, known as predictive policing.

There are practical pitfalls involved in the use of AI, says Lisandro Frene, Chair of the IBA Artificial Intelligence and Robotics Subcommittee and a partner at Richards, Cardinal, Tützer, Zabala & Zaefferer in Buenos Aires. These can include deficiencies and errors in the technology used and an overestimation of the capabilities and potential uses of AI technology.

‘Since most AI applications currently available for in-house teams [are usually] suitable for narrower and more clearly defined tasks, there may be risks associated with using AI technology for tasks for which it is not suitable, such as poor performance or erroneous results,’ he says.

‘Furthermore, the use of complex AI systems may cause issues related to transparency and difficulties in interpreting how the AI system came to a certain conclusion and/or result, which may be an issue in situations where explainability is important,’ he adds. ‘This also makes it hard to address errors with the performance of an AI system.’

Charlie Hawes, a senior associate in the Bristows AI and Robotics team, says the most obvious pitfall associated with the use of AI is a simple one: does it work? Organisations should ask themselves of the machine learning solution they want to build or procure, ‘is it going to do what you want it to do? Does it actually use machine learning? If so, how can you be sure it produces reliable outputs that achieve the objectives the business has in mind for it? We tend to see the most potential legal pitfalls around each end of the machine learning product: ie, what goes in and what comes out,’ he explains.

Dataset hygiene is incredibly important, he says. ‘If you want to use a dataset to train a model, or to provide to a machine-learning product or service, do you have the rights to use it for that purpose, whether from an IP or contract point of view? In-house teams should expect vendors to be able to demonstrate that data protection and IP issues have been covered off, and, if bias is a potential problem, that appropriate measures were taken during training and validation of the model to screen out bias as much as possible.’

“In-house teams should expect [AI] vendors to be able to demonstrate that data protection and IP issues have been covered off”

Charlie Hawes, Senior Associate, Bristows

The other area is outputs and IP, he says, which is complex and highly dependent on the sector and deal, but where lawyers need to ensure they have the rights necessary in the model outputs or in the model itself. Due to the way that machine learning works, and because of some underlying open questions in copyright law, ‘conventional distinctions between foreground, background and bespoke IP might need revisiting,’ says Hawes.

Another pitfall could come in the form of the AI system reflecting the biases of the humans who program it. An example would be an HR algorithm that discriminates against certain groups of people based on their gender or ethnicity, because it has been provided with insufficient training data. ‘Risks need to be evaluated on a case-by-case basis,’ says Frene.

Another form of pitfall involves the processing of personal data. Hübner says that EU customers may face certain difficulties using US service providers due to the EU data protection legislation and case law restricting how personal data may be exported to countries outside the EU/European Economic Area.

‘Other issues may include unclear or unfavourable contract terms with service providers,’ says Hübner. ‘Liability issues may also arise as a result of using and giving advice based on an AI system. For instance, this could include difficulties in identifying what actors are responsible for negligent acts and omissions and establishing causal links between the different actors’ actions and the failure or erroneous behaviour of the AI system.’

Mitigating the risk

And therein lies the rub: with the changes to working practices that AI brings comes new risk. A PwC survey of US companies, published in May and entitled AI Predictions 2021, found that although many companies are aware of the risks that AI can create, only about a third reported plans to make AI more explainable, improve its governance, reduce its bias, monitor its model performance, ensure its compliance with privacy regulations, develop and report on AI controls, and improve its defences against cyber threats.

Despite the enthusiasm that AI has been greeted with in some quarters, scepticism surrounding AI is still very high in companies, as it is in the rest of the population, says Wisskirchen. ‘Companies mainly criticize the lack of transparency in AI decisions. In addition, the legal issues are sometimes very complicated and require expertise in the area concerned. The legal situation is often unclear, as the law is lagging behind AI in many areas, so provisions have to be drawn on that were established before AI was even a major subject.’

Increasing transparency is at the heart of the EU’s proposed regulation on AI, published in April. The regulation as drafted would divide AI into unacceptable-risk, high-risk and limited- or minimal-risk systems, and proposes different regulatory requirements for each.

Unacceptable-risk AI systems, such as those that use subliminal, manipulative or exploitative techniques that cause harm; real-time, remote biometric identification systems used in public spaces for law enforcement; and all forms of social scoring – such as technology that evaluates an individual’s trustworthiness based on social behaviour or predicted personality traits – will be completely banned in the EU if and when the regulation is implemented.

High-risk systems, such as those that evaluate consumer creditworthiness, assist with recruiting or managing employees, or use biometric identification, would face the most stringent requirements, including obligations on human oversight, transparency, cyber security, risk management, data quality, monitoring and reporting.

Limited- and minimal-risk AI systems, which include many of the AI applications currently used throughout the business world, such as chatbots and AI-powered inventory management, would have fewer requirements. These would mainly take the form of specific transparency obligations, such as making users aware that they are interacting with a machine, or that a system uses emotion recognition or biometric classification.

AI applications are still often referred to as black boxes, Wisskirchen says, and users are often not clear on the basis on which a decision was made. This creates a lack of trust, she believes.

‘This problem has been addressed by the [European] Commission’s draft regulation,’ says Wisskirchen. ‘The aim is that systems must be designed in such a way that the results can be interpreted by the user. In this context, transparency also means that it should be indicated that an AI application is being used. In particular, AI systems that interact with humans (eg, chatbots) or generate or manipulate content (eg, deep fakes) should in future make it clear that they are AI-based, so users can then decide whether they want to continue using the application or not.’

A lack of transparency could particularly be an issue where the reasoning behind output for a decision is significant, or where sensitive interests or discrimination are a risk, says Hübner. ‘The regulation will in its current proposed form address certain issues with transparency by putting in place information requirements on how AI systems work and also a requirement to check such systems for biases and discriminatory behaviour.’

Such requirements, however, will only apply for AI systems labelled high-risk and there is some uncertainty relating to the circumstances in which they’ll apply, he says. ‘The regulation will in the best-case scenario lead to a more enlightened use of AI services and increased awareness regarding risks. However, the heavy requirements for systems to be labelled as high risk could also make organisations more hesitant to use certain types of AI technology that could otherwise be beneficial for businesses and for society,’ adds Hübner.

The need for balance

The motive for regulating this area is perfectly clear, but sometimes imposing rules on an immature market may kill it altogether if the participants find compliance costs higher than the benefits they get at this stage of development, says Nazar Chernyavsky of Sayenko Kharenko.

He explains that in the AI area, there’s a large amount of innovation, including from start-ups. In some cases, the technology these start-ups have developed is all such small players have, and they’re justifiably reluctant to make it transparent due to the competition risks from bigger players who can easily build or improve their own systems based on such knowledge. ‘Perhaps, some independent authority review may take place which would certify any specific AI solution, and then there would be fewer concerns about copycats and less manipulations from the side of bad-faith providers who cover up their selective approach with [an] AI black box,’ says Chernyavsky.

The proposed EU regulation mostly aims to protect individual consumers in some sensitive areas, such as lending, insurance and employment, he adds. ‘That is a genuine concern,’ he says, ‘but some balance needs to be achieved between [the] protection of consumers’ rights and incentive for innovation [in] this high-potential and fast-growing market.’ 

“Some balance needs to be achieved between [the] protection of consumers’ rights and incentive for innovation [in] this high-potential and fast-growing market”

Nazar Chernyavsky, Vice Chair, IBA Technology Law Committee

The draft EU regulation sets the tone for governing AI via a set of harmonised rules, says Abhijit Mukhopadhyay, Committee Liaison Officer of the IBA Corporate Counsel Forum and President (Legal) & General Counsel at London-headquartered Indian conglomerate the Hinduja Group. For Mukhopadhyay, the proposal attempts to promote innovation and AI’s benefits, ‘with the emphasis on reducing the risks of fraud and loss of private information. This will lead to elimination of harmful practices and encourage compliance and increased transparency’.

While the implementation of AI tools will undoubtedly enhance productivity and facilitate legal work, some fear that such technology will eventually replace lawyers. For Rupinder Malik, a partner at JSA Law in Delhi, this isn’t a threat. ‘Emerging technology will change the way we work, read, analyse, argue, identify, research and provide services to a client,’ she says. ‘But while the AI tools available in the market have come a long way, there is still a need to bridge certain gaps.’

Ultimately, believes Mukhopadhyay, machines are replacing human beings in all walks of life and the legal sector cannot be an exception. ‘However, investment in AI needs to be done carefully so that it does not become an idle investment. Proper training is necessary to use AI. Realignment of human resources may be a factor which has to be dealt with compassionately. The replacement of [the] human brain is to be done carefully.’