Defending claims relating to the use of artificial intelligence

Tuesday 23 April 2024



Rupert Sydenham
Hogan Lovells International, London
rupert.sydenham@hoganlovells.com

Tom Smith
Hogan Lovells International, London
tom.smith@hoganlovells.com

Annie Lund
Hogan Lovells International, London
annie.lund@hoganlovells.com

Elmina Marriot
Hogan Lovells International, London
elmina.marriot@hoganlovells.com

Introduction

From AI-powered hard hats[1] to humanoid labourers,[2] the potential for artificial intelligence (AI) to transform the construction industry seems vast. While most industry players have only just started exploring what AI has to offer, commentators such as the World Economic Forum[3] suggest that companies that fail to integrate AI into their businesses will suffer the consequences: a lack of competitiveness and, thus, profitability.

It therefore seems likely that the services provided by contractors and design professionals to employers will increasingly be carried out not just by humans, but also by AI-based systems, and that employers will increasingly seek AI-based solutions from their contractors and design professionals. Indeed, generative AI tools are already widely in use in many engineering settings, for example, for producing multiple alternative design options for specific components in accordance with pre-defined specifications and criteria.

However, as is often the case with novel technology, unforeseen issues with innovative AI-based systems and solutions seem likely to occur. And, as a consequence, disputes relating to AI are also likely. This then raises the question: what would a contractor or design professional need to do to successfully defend a claim arising out of its use of generative AI in a construction project?

In this article, we attempt to answer this question for two key types of claim where AI-related disputes could arise, namely: (1) an employer’s claim against a design professional for breach of its obligation to use reasonable skill and care in producing its design; and (2) an employer’s claim against a contractor for breach of its obligation to deliver a project fit for the employer’s purpose.

Having considered these claims, we go on to provide some practical recommendations for parties using generative AI for, or integrating generative AI into, their construction project deliverables. In short, all parties wishing to employ AI to deliver construction projects, or to provide AI-based solutions within the project deliverables, should take steps to generate documentation that records how such AI-based technology has been designed or used, and the steps the party has taken to make sure the technology has been designed or used appropriately.

We then consider other ways in which the use of AI on construction projects could affect claims or disputes between parties in the future, and their implications.

Reasonable skill and care

Imagine the following scenario: an employer (the ‘Employer’) hired an engineering firm (the ‘Design Professional’) to design the towers of a new offshore wind farm, with the contract between the parties providing that the Design Professional would carry out the design services with reasonable skill and care. The Design Professional then used a generative AI-powered tool to produce its design for the towers, and the towers were constructed and installed accordingly. However, after the wind farm has been operational for two years, cracks begin to appear in the towers.

In this scenario, the Employer may decide to bring a claim for damages against the Design Professional for breach of its obligation to exercise reasonable skill and care in producing its design for the towers. In defending itself against such a claim, the Design Professional may need to establish that its use of the generative AI-powered tool was not in breach of its obligation to act with reasonable skill and care. The question then is: what evidence does the Design Professional require to do this?

We suggest that the Design Professional may require evidence in some or all of the following categories:

1. Evidence regarding the suitability of the generative AI-powered tool for the project

The Design Professional may want to adduce evidence that it exercised reasonable skill and care in deciding that the generative AI-powered tool it used was suitable for the project in question. This might include evidence that before using the tool, the Design Professional:

• identified that the tool had been used for similar tasks on similar projects before, and that it had a track record of producing appropriate results for those projects (something that may be inherently difficult at the current stage of AI adoption, given the nascent state of many of these tools);


• verified that the tool had been trained on data that was appropriate for the project in question, perhaps including verifying that the data used to train the tool:

– came from appropriate sources;

– contained appropriate data points;

– was free from obvious errors; and

• confirmed that the information the Design Professional intended to input into the tool was appropriate to use, on the basis of the tool’s training and guidelines for use.

2. Evidence regarding the way in which the generative AI-powered tool was used during the project

The Design Professional may also want to adduce evidence that it exercised reasonable skill and care during its use of the generative AI-powered tool during the project. This might include evidence that the Design Professional:

• confirmed that the engineers who used the tool in question were appropriately experienced and/or trained to use it; and

• implemented appropriate human-led oversight processes[4] during use of the tool to increase the chances of the tool producing an appropriate design, for example:

– that its engineers followed the oversight processes recommended by the designer of the generative AI-powered tool, whether within the Design Professional’s organisation or outside it; and/or

– that it developed and implemented its own human-led oversight processes specific to the project in question to identify and eradicate potential errors within the tool’s output.

While case law has established that a professional is not, by default, acting unreasonably merely because another professional in the same field would not have acted in exactly the same way in the same scenario, common industry practice or best practice can be good evidence of what constitutes an exercise of reasonable skill and care. The Design Professional may therefore also want to adduce evidence – perhaps expert evidence – that its use of the generative AI-powered tool was in accordance with industry standards as to the use of AI, particularly generative AI-based systems, if and to the extent such standards exist.

3. Evidence regarding the way the Design Professional used the generative AI-powered tool’s output(s)

The Design Professional may also want to adduce evidence that it used the output(s) from the generative AI-powered tool appropriately. For example, this might include evidence that the Design Professional:

• took steps to verify whether or not the design produced by the tool was adequately safe, for example by reference to appropriate engineering standards, and, if not, that it was revised either by humans or by the tool itself to resolve any issues identified; and/or

• if the tool produced multiple designs, interrogated each design adequately and chose which design to progress on an appropriate basis.
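By way of illustration only, the kind of verification described in the first bullet could itself be recorded programmatically, so that the check leaves an evidential trail. The following Python sketch tests a hypothetical AI-generated tower design against invented placeholder limits; the field names and thresholds are our assumptions for this example, not real engineering standards:

```python
from dataclasses import dataclass

# Illustrative only: the limits below are invented placeholders,
# not real engineering standards for wind-turbine towers.

@dataclass
class TowerDesign:
    wall_thickness_mm: float
    max_stress_mpa: float        # peak stress predicted for the design
    natural_frequency_hz: float

def verify_design(design: TowerDesign) -> list[str]:
    """Return a list of issues; an empty list means the design passed
    every check. Each failed check is recorded, so the verification
    itself generates contemporaneous evidence."""
    issues = []
    if design.wall_thickness_mm < 20.0:                   # assumed minimum
        issues.append("wall thickness below assumed minimum")
    if design.max_stress_mpa > 355.0:                     # assumed yield limit
        issues.append("predicted stress exceeds assumed yield limit")
    if not 0.25 <= design.natural_frequency_hz <= 0.35:   # assumed safe band
        issues.append("natural frequency outside assumed safe band")
    return issues

# A design produced by the tool is checked before being progressed.
candidate = TowerDesign(wall_thickness_mm=25.0, max_stress_mpa=300.0,
                        natural_frequency_hz=0.30)
print(verify_design(candidate))  # an empty list means all checks passed
```

A record of which checks were run, and with what result, is precisely the kind of material a Design Professional could later adduce to show its outputs were appropriately verified.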


The Design Professional may want to adduce evidence – again, perhaps expert evidence – that the output of the generative AI-powered tool accorded with other designs produced by other design professionals for similar projects.

Fitness for purpose

Imagine a related scenario: the Employer entered into a contract with a contractor (the ‘Contractor’), pursuant to which the Contractor was under an express fitness for purpose obligation to provide the Employer with an operating system for the wind farm that maximised the amount of electricity generated.

The Contractor chose to provide an operating system that used an AI algorithm to monitor the turbines’ performance and decide when to cease operations, for example: (i) to instigate scheduled maintenance outages with the aim of preventing unscheduled outages; and (ii) during unsafe windspeeds. However, after the wind farm has been operating for a year, the Employer becomes concerned that there have been unnecessary scheduled maintenance outages, that the turbines have ceased operations in windspeeds at which they could have operated safely and that, accordingly, the wind farm has generated significantly less electricity than it ought to have.

In this scenario, the Employer may decide to bring a claim for damages against the Contractor, alleging that it is in breach of the fitness for purpose obligation to maximise electricity generation because more electricity would have been generated if the AI algorithm had taken a less risk-averse approach to both scheduled maintenance outages and windspeed.

In defending itself against this claim, the Contractor would, of course, need to establish that the AI-based operating system it provided to the Employer was fit for purpose in maximising electricity generation. Again, the question that arises is: what evidence would the Contractor require to do this?

Our assumption in the discussion about evidence that follows is that the Employer’s purpose of maximising electricity generation is not defined by reference to precise outputs, against which it would be relatively straightforward to measure the operating system’s performance. On this assumption, we consider that the following categories of evidence would be useful:

4. Evidence regarding how the decisions of the AI-based system accorded with the Employer’s purpose

Ideally, the Contractor would want to be able to adduce evidence to explain:

• how the AI algorithm within the operating system made each of its decisions to instigate a maintenance outage and cease operations during high windspeeds; and

• how each such decision made by the system accorded with the Employer’s purpose in the relevant scenario, that is, why the AI algorithm was correct to decide that less electricity overall would have been generated had the wind farm continued operations without maintenance and/or in high windspeeds. Relevant considerations in the system’s decision-making might include the assessment of risk of malfunction or damage in the absence of maintenance or shut down, leading to potentially greater outage time overall.

The power of generative AI systems derives from their ability to analyse enormous quantities of data and to apply ‘learning’ derived from that data to produce decisions or outputs. However, a criticism levelled at current systems is that they cannot explain the basis of their decisions. Evidence of how an AI algorithm made its decisions, referred to above, may therefore be hard to produce with current state-of-the-art AI. As a second-best alternative, if the Contractor cannot explain the operation of the AI algorithm and the system to this extent, it would probably instead want to adduce evidence to explain:

• the training objectives the AI algorithm was set during its design and development;

• why these training objectives were appropriate objectives to achieve the Employer’s purpose; that is, why they would achieve the Employer’s purpose of maximising electricity generation while having to cease operations during maintenance outages and/or high windspeeds; and

• how the decisions the AI algorithm made during the wind farm’s operation accorded with these training objectives.

The relevance or usefulness of such evidence may be limited if the contract stipulates precise outputs which the operating system has failed to achieve. However, on our assumption above, that is not the case. The evidence may be persuasive in support of a case that the operating system meets the purpose of maximising electricity generation. 
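The evidence described above is easier to assemble if each decision is logged, at the time it is made, against the objective it served. The following Python sketch shows one purely hypothetical way an operating system could keep such a record; all field names, values and the stated objective are invented for this example and do not describe any real system:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch only: the fields and values are invented for
# this example, not a description of any real operating system.

@dataclass
class ShutdownDecision:
    timestamp: str
    windspeed_ms: float   # measured windspeed when the decision was made
    action: str           # e.g. "cease operations" or "continue"
    objective: str        # the training objective the decision served
    rationale: str        # why the decision accorded with that objective

@dataclass
class DecisionLog:
    entries: list = field(default_factory=list)

    def record(self, windspeed_ms: float, action: str,
               objective: str, rationale: str) -> None:
        # Timestamping each entry at the moment of decision gives the
        # log contemporaneous, and therefore more persuasive, weight.
        self.entries.append(ShutdownDecision(
            timestamp=datetime.now(timezone.utc).isoformat(),
            windspeed_ms=windspeed_ms,
            action=action,
            objective=objective,
            rationale=rationale,
        ))

log = DecisionLog()
log.record(
    windspeed_ms=27.5,
    action="cease operations",
    objective="maximise net generation over the asset's life",
    rationale="predicted fatigue damage outweighed the short-term "
              "generation lost during the shutdown",
)
```

A log of this kind would not make the underlying algorithm explainable, but it would at least tie each individual decision back to a stated objective, which is the link the Contractor needs to evidence.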

5. Evidence regarding how the AI algorithm’s decisions accorded with decisions of a non-AI-based operating system

The Contractor may also want to adduce evidence, if obtainable, demonstrating that the decisions of its AI-based operating system were the same as, or superior to, the decisions an equivalent non-AI-based operating system would have made. The Contractor might do this by comparing its operating system with the operation of other available systems, with the support of expert evidence.

Practical implications

As is clear from the analysis above, to defend against potential claims relating to the use of AI in construction projects, the party using an AI-based system or providing an AI-based solution will want to have evidence that demonstrates why any perceived limitations in the AI did not reflect failures in its contractual duties.

To do this, the party using an AI-based system in its delivery of a construction project should build up a bank of evidence during the course of the project that demonstrates:

• it was appropriate for the party to use the AI in question;

• appropriate steps were taken to oversee its use;

• appropriate controls were in place to manage the risks associated with its use; and

• its outputs were appropriate, and appropriately verified.

Similarly, the party providing an AI-based solution as part of its project deliverables should build up a bank of evidence that explains, to the extent possible, how and why its solution meets its client’s objectives.

These evidence banks could be generated in any number of ways. For example, regarding a party’s procurement and use of an AI-based system, the party could set out the due diligence it carried out prior to procurement and record each stage of its use of the system in the minutes of project review meetings or similar, or through a specific internal governance tool that creates an electronic paper trail. Regarding a party’s development of an AI-based solution, the party could implement a monitoring and review procedure that records how the system was developed and tested from start to finish. Whatever form the evidence bank takes, the party in question should have at its disposal evidence demonstrating that a transparent and easily explainable process was followed.
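For illustration, the ‘electronic paper trail’ mentioned above might be as simple as an append-only log in which each entry is hashed together with its predecessor, so that any later alteration is detectable. The following Python sketch is an invented example under those assumptions, not a description of any real governance tool:

```python
import hashlib
import json
from datetime import datetime, timezone

# Minimal illustrative sketch of an append-only evidence log. Chaining
# each entry's hash to the previous one makes later tampering
# detectable, which supports the record's evidential value.

class EvidenceLog:
    def __init__(self):
        self.entries = []

    def record(self, stage: str, description: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "stage": stage,              # e.g. "due diligence", "oversight"
            "description": description,
            "prev_hash": prev_hash,
        }
        # Hash the entry contents (including the previous hash) so the
        # chain breaks if any earlier entry is later altered.
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify_chain(self) -> bool:
        """Recompute every hash to confirm no entry has been altered."""
        prev = "genesis"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True)
                              .encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

bank = EvidenceLog()
bank.record("due diligence", "Reviewed tool vendor's training-data summary")
bank.record("oversight", "Senior engineer signed off generated design rev 3")
```

The design choice here is simply that a record whose integrity can be demonstrated after the event is more persuasive than one that could have been written retrospectively.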

Other potential areas of dispute

Of course, the question of how a party has used AI for or within a construction project will not only arise in the context of fitness for purpose and reasonable skill and care claims. There are numerous other categories of claims or disputes to which evidence of a party’s use of AI, or lack thereof, may be relevant. For example:

Duty to warn

As AI-based tools to monitor risks on construction sites develop in sophistication over time, they may become better than humans at predicting or identifying dangers. Accordingly, a professional consultant’s implied or express contractual duty to warn its employer about dangers of which it is aware, or ought to be aware, may extend to warning its employer about those dangers which a market standard AI tool would have predicted, even if the professional consultant had not used any such tool. Put another way, AI-based risk-monitoring tools may broaden the scope of a professional consultant’s actual or deemed knowledge of dangers on site.


By implication, this may broaden the grounds on which a professional consultant may be found to have breached its duty to warn. It is therefore important for any professional consultant interacting with AI-based tools to take care to understand what tools are available, how they work and what their limitations may be.

Cybersecurity and data security

As AI-based systems are increasingly used for, and integrated into, construction projects, malicious actors may be increasingly attracted to hacking into these systems in order to obtain valuable information as to how the project was constructed and/or is operated.

Construction companies may therefore see an increase in cyber and/or data security-related claims brought against them. If, for example, an AI-based system used on a construction project is hacked by a malicious actor, and a party, such as the employer or main contractor, suffers loss as a result, through unavailability of certain services or loss of use of certain parts of the project, that party may seek to recover its losses from the party providing the AI-based system (or, more likely, its insurers). It may do so by bringing a contractual claim pursuant to express or implied terms relating to cyber and/or data security, or to the obligation to exercise reasonable skill and care in carrying out services, or potentially through a claim in negligence.

Moreover, national and international bodies are increasingly introducing legislation and other forms of law governing the use of AI-based systems within their jurisdictions. As a consequence, a party providing an AI-based system in breach of such laws may find itself facing statutory claims, as well as claims from its contractual counterparties for breach of the party’s contractual obligation to comply with applicable law.

Parties using AI-based systems will therefore need to ensure that they implement strong cybersecurity and data security protections in order to mitigate the risk of such claims or disputes.

Unforeseeable events/unforeseeable ground conditions

A party’s ability to obtain relief under the force majeure provisions of a construction contract for extreme weather events often relies on that party being able to prove that the event in question was unforeseeable. Similarly, a party’s ability to obtain relief under the extension of time provisions of a construction contract for unexpectedly difficult ground conditions often relies on that party being able to prove that the ground conditions encountered were unforeseeable.

If AI-based systems improve the ability of construction industry players to predict the possibility of extreme weather events or ground conditions, it may be harder for a party to a construction contract to claim that the event or condition encountered was truly unforeseeable, and therefore to obtain the contractual relief they seek – particularly if it is known that they are using these tools to improve their tendering position.

It may be, therefore, that contractors and sub-contractors will want to ‘price in’ these risks in their tender bids and timetables, as such provisions become increasingly hard to rely upon.

Conclusion

The potential for generative AI to benefit the construction industry is enormous. However, its adoption also gives rise to the potential for disputes. As we have suggested in this article, there are prudent steps to be taken in advance of disputes arising to protect a party’s position when they do.

 

[1] Construction Management, ‘Costain and Winvic help develop AI hard hats’ (15 October 2019) https://constructionmanagement.co.uk/costain-and-winvic-help-develop-ai-hard-hats accessed 16 February 2024.

[2] AIST, ‘Development of a Humanoid Robot Prototype, HRP-5P, Capable of Heavy Labor’ (16 November 2018) www.aist.go.jp/aist_e/list/latest_research/2018/20181116/en20181116.html accessed 16 February 2024.

[3] World Economic Forum, ‘4 ways AI is revolutionising the construction industry’ (21 June 2023) http://www.weforum.org/agenda/2023/06/4-ways-ai-is-revolutionising-the-construction-industry accessed 23 January 2024.

[4] In these early stages of AI adoption, we suggest a Design Professional who provides no or very minimal human-led oversight over the operation of an AI-based system is unlikely to meet the standard of exercising reasonable skill and care, although this may change if and to the extent that certain AI tools develop a track record of producing reliable results.

Rupert Sydenham is a partner at Hogan Lovells International based in London and can be contacted at rupert.sydenham@hoganlovells.com.

Tom Smith is a partner at Hogan Lovells International based in London and can be contacted at tom.smith@hoganlovells.com.

Annie Lund is an associate at Hogan Lovells International based in London and can be contacted at annie.lund@hoganlovells.com.

Elmina Marriot is a trainee solicitor at Hogan Lovells International based in London and can be contacted at elmina.marriot@hoganlovells.com.