Uptake of artificial intelligence by in-house teams


Artificial intelligence (AI) is set to have a significant impact on businesses in almost every sector, including their in-house legal teams. Susie Lunt looks at some of the ways in which legal teams are reaping the benefits of AI products, as well as what holds them back from increased uptake.

Today, ‘almost every company is a technology company’, says Lisandro Frene, Vice Chair of the IBA Artificial Intelligence and Robotics Subcommittee and a partner at Richards Cardinal Tutzer Zabala & Zaefferer. ‘Far from being a tool or commodity, technology is now part of the core business of every company, including law firms’.

This technology includes AI, which clearly has a role in assisting in-house legal teams. However, the extent to which AI is being used in practice is debatable.

Bloomberg Law recently conducted a survey of almost 500 in-house and law firm practitioners and published the results in May 2019. Its survey suggests over half of law firms and legal departments do not use tools based on AI or machine learning.

Both regulatory challenges – or sometimes the lack of proper regulation, particularly in under-developed countries – and the lack of ‘digital culture’ can prove barriers to the use of AI.

Gerlind Wisskirchen, Vice Chair for Multinationals at the IBA Global Employment Institute (GEI) and a partner at CMS, is coordinating a GEI report on digitisation and its impact on the working world. The report will be published later in 2019 and will, among other issues, examine the role played by AI.

Wisskirchen says the dominant uses of algorithms in the legal field range from contract management to due diligence and discovery.

Contracts developed using AI are not yet in use, however, owing to ‘legal uncertainty’, she says, although they could emerge in the future. Meanwhile, other prospective developments include ‘the whole of mergers and acquisitions falling under AI’, says Wisskirchen.

Facundo Perez Geist, an in-house lawyer and data privacy officer for Argentina and Uruguay at cosmetics giant L’Oreal, says digital technology is a ‘real revolution’ at his company and its top business priority. L’Oreal has invested in several incubators to support the growth of leading startups within the sector.

In 2018, L’Oreal acquired augmented reality and AI company ModiFace. The acquisition marks the beginning of a second phase of digital transformation at the company, one focused on reinventing the ‘beauty experience’ through technologies such as voice, augmented reality and AI. ‘We believe that services will be the new entry doors to discover our brands and products’, says Perez Geist.

He says AI clearly represents a challenge for internal legal teams – from intellectual property issues to terms and conditions of use and the privacy policies involved.

According to Wisskirchen, uptake of AI by legal departments is also being driven by demands for efficiency and cost reduction, and the need to align with the business. Another factor is that organisations in general are increasing their use of software and AI.

Perez Geist concurs, and explains that within his organisation, he sees internal processes continually being improved to adapt to changes in digital technology. ‘Legal teams always work in advance to anticipate the needs that may arise from [digital] developments at the local or regional level’, he says. ‘In accordance with our code of ethics, we always seek to observe local and regional policies that may involve both consumers as well as all our stakeholders.’

In-house lawyer Alejandro Llosa of Accenture Ireland is the company’s legal lead for contracting and post-merger integration activities in Europe. Accenture’s vision is for its legal department to become the most technologically enabled internal legal function in a professional services firm, he explains.

 

‘The key driver is that the use of technology, including [AI] tools, is the way to continue growing in an environment where the need of agile, sophisticated and at-scale legal support is at the core of the business, where the number of legal professionals is limited’, says Llosa. ‘We need our lawyers to focus on key and complex areas, working efficiently assisted by technology’.

Llosa says that technology is at the heart of the Accenture legal department’s way of working, and that, as such, AI is one of the tools it wields. An example is the department’s use of a smart internal chatbot, developed to assist legal professionals with data privacy matters.

Other tools include an intelligent contract search tool, embedded with analytics elements, smart identification of risky provisions and other functionalities. ‘We are also exploring software that will provide initial mark-ups of contracts based on AI capabilities’, says Llosa.

A key challenge is finding the right mix of technology and capabilities to match the needs of the company, the way it operates and its culture. ‘The solutions offered in the market still require a lot of customisation to be usable by a company’s legal department’, says Llosa. ‘That also requires important investments and the support of key internal stakeholders is crucial’.

While there are many benefits to in-house teams from the use of AI, general counsel should be aware that legal issues can arise from the use of such technology.
Sometimes these issues concern AI’s compliance with existing laws, while others arise when products fall outside of legal frameworks altogether. ‘There is not yet a legal framework for smart contracts, so they are operating outside of the legal framework, which might be a problem [in] itself’, says Wisskirchen.

‘Ethical issues can play a role, such as with programme tools and hiring algorithms – there’s a risk that they might be programmed wrongly’, adds Wisskirchen. ‘Only time will prove whether they are better or more efficient than human beings.’