Pop goes the compute? Contracting data centres and navigating bubble trouble

Tuesday 24 March 2026

Jake Owen
Vinson & Elkins, London

In the second quarter of 2025 alone,[1] the seven big tech companies committed over $100bn to artificial intelligence (AI) data centres, an amount exceeding the annual gross domestic product (GDP) of countries such as Croatia and Costa Rica.[2] Forecasts suggest that between 2025 and 2029, global capital deployment to AI data centres could reach $3tn.[3] The United States Department of Energy estimates that data centres will account for 12 per cent of all US electricity use by 2028.[4] The prevailing narrative attributes this surge in investment to the explosive growth of, and interest in, large language models (LLMs).[5] But hand-held case summaries and instant shopping lists are only part of the story. Rising demand for, and reliance on, cloud-based solutions,[6] which remain a driver of computing demand, coupled with the strategic imperative to secure an advantageous position in harnessing AI’s greater potential use cases, have driven corporations to build at a pace that assumes sustained, exponential demand for high-density processing.

Whether that anticipated demand will materialise is the key question. In the science, technology, engineering and mathematics (STEM) fields, the knowledge that can be extracted from human-generated data is rapidly approaching a limit:[7] the majority of high-quality data sources – those that can actually train an agent – have either been consumed or soon will be. Many in the space therefore predict that a new source of data will be required. One candidate is experiential data: AI agents continually learning from their own experience, that is, from data generated by the agent interacting with its environment. Without this progression, AI may become little more than an echo chamber of existing human knowledge, which would lead to severe market overcapacity. If, by contrast, AI becomes a general feature of human life, it could establish its place as ‘the most important technology that enables the most new products and innovation and value creation in history’,[8] and computing demand is likely to accelerate even beyond today’s ambitious forecasts. However, as many commentators now warn, a mismatch between the supply of and demand for computing capacity could trigger a major correction resembling the post-dotcom crash of the early 2000s.

The opportunity for the industry lies in preparing for both scenarios. The choices made today in procurement strategy, contract design and risk allocation are likely to determine who thrives and who falls amid the uncertainty.

Power and capacity

If the optimistic trajectory is realised, AI is likely to evolve into a general-purpose technology embedded in nearly every sector of the economy. A key potential check on this growth, apart from grid connectivity, is the mismatch between technological requirements and the capacity of existing data centre designs. This is particularly relevant as more facilities are built for single-tenant use. Tenants are increasingly demanding enhanced power and cooling to support new generations of graphics processing units (GPUs).[9] For example, current GPUs designed for AI data centres can consume up to 200 kW per rack, over ten times the industry average, making upgrades to power and cooling infrastructure unavoidable.[10] Without the flexibility to adapt, tenants may decline to renew leases for facilities that no longer meet their operational needs, exposing data centres to the risk of under-occupancy.

Adapting contracts for technological evolution

Traditional engineering, procurement and construction (EPC) and design & build models often struggle with variations triggered by evolving standards. A common challenge is determining whether requested upgrades fall within the scope of a valid variation.

Perhaps, then, variation clauses should make explicit reference to the upgrades likely to be required if AI growth accelerates. For example, parties could specifically address instances where an industry standard is expected to evolve to accommodate higher cooling requirements than the existing design. Doing so would ensure that the variation is reasonably within the contemplation of both parties and not materially different from the original contract. In addition, employers should consider updating their ‘Employer’s Requirements’ to make clear that the project should be designed and constructed to the latest standards, with sufficient redundancy built in to accommodate innovative future expansions.

However, there is an inherent tension between the employer’s commercial imperative to deliver the project to market swiftly while remaining abreast of technological advancements, and the fact that risk allocation in contracts is often determined by information available years earlier. The clauses introduced to manage this often rely on caps in the variation process: once a threshold (determined by contractor valuations of outstanding variations) is met, the contractor is relieved of the obligation to agree to and implement a variation until its price is determined, before commencing further critical variation work. This can leave a data centre developer held to ransom: if the employer does not agree the price, programme delay is all but unavoidable.

variation clauses should make explicit reference to upgrades that are likely to be required if AI growth accelerates

Where changes are material but do not fundamentally alter the nature of the data centre, such as the addition of an extra hall or an overhaul of the cooling system, these can usually be managed as variations under the main contract, provided the variation clause is drafted with sufficient foresight. Issues are more likely to arise when such upgrades have continual knock-on effects, resulting in multiple variations across the whole project. English case law, including Cobalt v HMRC,[11] indicates that only changes so fundamental as to create a different project altogether would require a new contract. To avoid the inefficiency and risk of multiple piecemeal variations as technology advances, parties may therefore prefer to structure the contract to allow for anticipated upgrades through a single, comprehensive variation or, for larger expansions, through a framework agreement. The framework agreement operates as an umbrella contract, setting out the overarching commercial and legal terms that will govern future works, including risk allocation, pricing and performance standards. Where capacity requirements increase, each new phase of expansion (eg, 20 MW–30 MW) can be called off. These call-offs inherit the pre-agreed terms with the existing contractor, ensuring consistency and avoiding the need to renegotiate the fundamentals. By contrast, incremental upgrades, such as efficiency tweaks, can still be managed as variations under the existing main contract. This approach balances flexibility with certainty: the employer can pause, accelerate or reshape the project in response to market demand or regulatory change without being trapped by the limitations of a single, rigid construction contract.

Supply chain challenges

Industry research in 2023 found that 93 per cent of operators had experienced supply chain challenges, with power and cooling systems the most commonly affected.[12] Common bottlenecks involve the procurement of:

• generators (especially smaller gas turbines and diesel gensets);

• switchgear and transformers;

• cooling equipment (computer room air conditioner (CRAC)/computer room air handler (CRAH) units and chillers);

• specialised batteries and uninterruptible power supplies (UPS) systems; and

• high-capacity cables and busbars.

Demand for AI-ready facilities has driven lead times for custom components to double or more.[13] Supply chain difficulties have constrained employers’ ability to scale up quickly enough to capture surging demand, leaving them exposed to lost revenue opportunities. One way to manage this is to combine early procurement with contractual flexibility. Where the contractor is responsible for procurement, a pre-construction services agreement can authorise the contractor to place binding orders for long-lead items, such as transformers, generators and chillers. Title to these items typically vests in the employer upon payment, with storage and insurance obligations clearly defined. Later, when demand materialises, the contract provides for the installation of such equipment at pre-agreed rates, avoiding delays and disputes. If, instead, the employer undertakes the procurement itself, framework supply agreements provide a safety net. For instance, a framework agreement with multiple suppliers of critical components, such as GPUs, enables a call-off from supplier A if supplier B slips, mitigating the risk of having to restart negotiations mid-project and derailing the programme.

Innovations in construction materials

Beyond power and cooling, the materials used to build data centres are becoming a strategic lever. Hyperscale owners are now using bespoke, performance-based concretes that are tailored for each project. These concretes are not only helping to reduce carbon emissions but also making it easier to keep construction on schedule. By using AI to design the concrete mix, builders can achieve faster strength gains while using less cement, which is both more sustainable and cost-effective.[14] In parallel, the industry is starting to use three-dimensional (3D) printing for certain concrete parts of the building, both structural and non-structural. This technology can speed up the most critical activities, cut down on waste and make it easier to repeat successful designs across different sites. Together, these changes show a move away from buying standard, off-the-shelf materials towards using custom-made solutions. These innovations shift risk in ways contracts must address, particularly concerning intellectual property, data rights and performance-based specifications related to these digital concrete designs.

For employers, these innovations can mean more predictable construction timelines and the ability to meet sustainability targets on a large scale. For contractors and suppliers, however, this is a new and more complex area of competition, where expertise in materials science, digital technology and new building methods all come together, bringing with it new types of risk that must be managed in contracts.

Mitigating the risk of the bubble bursting

It is estimated that by 2030, employers will need to secure around $2tn to meet projected computing demand.[15] Employers typically fund only the initial development costs themselves, leaving the bulk of construction reliant on external financing. Most of this funding comes from structured finance, where securities are backed by projected lease payments from tenants. If a bubble bursts, therefore, lease-backed securities could lose their value, undermining project viability.

A contingency against mid-project cancellations is to adopt a staged procurement method, where a project would be broken into phases. For example:

• Stage 1: The process begins with a pre-construction services agreement, under which contractors are engaged to carry out design work and surveys.

• Stage 2: The parties might consider entering into an early works contract in respect of enabling works such as site preparation and connection to utilities.

Although this might take more time and effort, each stage would act as a decision gate, creating a pause before significant additional costs are sunk if the direction of the market becomes uncertain.

Further, escrow arrangements could be layered into the structure. A payment escrow account can hold staged drawdowns of the contract sum, released only against contractual milestones. This could track the stages described above, so that if the project is cancelled midway, the contractor is paid for completed works and the employer recovers the balance. Compared with performance bonds, escrow accounts provide mutual protection for the employer and the contractor. In volatile markets, escrow offers a more balanced and transparent safeguard than bonds alone, although it is more capital-intensive for the employer. In practice, the two can be used together: the bond acts as a backstop against outright default, while the escrow serves as a practical tool for managing the staged deployment of capital.
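The staged-drawdown mechanics lend themselves to a simple worked example. The following sketch (with hypothetical milestone names and percentages, not drawn from any actual contract) models how a milestone-based escrow would split the contract sum on a mid-project cancellation:

```python
# Illustrative model of a milestone-based payment escrow on a data centre
# project. Milestone names and percentages are hypothetical, for
# demonstration only.

def settle_escrow(contract_sum: int, milestones: dict[str, int],
                  completed: set[str]) -> tuple[int, int]:
    """Split the escrowed contract sum on a mid-project cancellation.

    milestones maps each milestone to its percentage of the contract sum;
    completed holds the milestones achieved before cancellation.
    Returns (amount released to the contractor, balance recovered by the
    employer).
    """
    if sum(milestones.values()) != 100:
        raise ValueError("milestone percentages must total 100")
    released = sum(contract_sum * pct // 100
                   for name, pct in milestones.items() if name in completed)
    return released, contract_sum - released

# Example: a $100m contract staged across four decision gates, cancelled
# after the enabling works are complete.
milestones = {
    "design_complete": 10,
    "enabling_works": 15,
    "shell_and_core": 45,
    "fit_out": 30,
}
to_contractor, to_employer = settle_escrow(
    100_000_000, milestones, {"design_complete", "enabling_works"})
# The contractor is paid for the completed stages ($25m); the employer
# recovers the balance ($75m).
```

On this model, cancellation after the enabling works releases 25 per cent of the contract sum to the contractor and returns the remaining 75 per cent to the employer, which is the mutual protection the escrow is intended to provide.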

Another key contractual risk mitigation strategy is negotiating robust termination-for-convenience and suspension clauses. These clauses are critical in a volatile market because they provide a legal framework for pausing or cancelling projects that are no longer commercially viable. From an employer’s perspective, the consequences of such a termination might be capped at demobilisation costs plus a modest percentage of lost profit. Contractors, however, are likely to seek to recover their full investment, including tendering costs and all losses resulting from the termination, potentially including consequential loss. Increasingly, contractors are also attempting to make termination rights contingent on the payment of outstanding variation sums.

In volatile markets, escrow offers a more balanced and transparent safeguard than bonds alone

Designing for adaptability and repurposing

In a burst-bubble scenario, data centres risk a surplus of capacity relative to actual computing demand. Put simply, tenants will have little incentive to maintain leases for facilities that provide far more processing power than they require. An effective mitigation strategy is a hybrid data centre model, combining an AI-optimised facility with a traditional on-premises data centre for general server workloads and cloud integration. Instead of committing the entire facility to AI workloads, employers can design contracts and technical systems that allow parts of the facility to be repurposed between different uses. This means structuring the main EPC or design & build contract with variation clauses that permit the reconfiguration of halls into mixed-density or colocation spaces when utilisation drops. These variations should be tied to objective triggers, for instance, sustained utilisation below 50 per cent for a defined period, and priced against a pre-agreed schedule of rates. This ensures that capacity can be downsized quickly and transparently while avoiding disputes.

For larger shifts, such as converting an entire wing of a facility into a cloud hub or repurposing space for enterprise colocation, a safer route might be modular construction. This would allow the employer to issue a new call-off contract for the repurposing works, resetting scope, risk allocation and pricing without stretching the original contract beyond recognition. This mitigates the legal risk of variation claims, while still giving the employer and operator the flexibility to adapt to market conditions.

Modular data centres consist of prefabricated elements built offsite and assembled onsite, offering a flexible alternative to traditional construction. Key components such as cooling and power systems can be customised, and employers can purchase modules in stages rather than committing to a full build upfront. This staged approach allows employers to match capacity to actual demand, reducing upfront capital expenditure and the risk of stranded assets if markets change. If demand slows, future module orders can be paused or cancelled without disrupting ongoing works; if demand rises, additional modules can be quickly deployed, avoiding the long lead times of conventional builds.

However, modular data centres are not suitable for every project. Extensive customisation or integration with existing hyperscale facilities can be technically challenging. Adding modules mid-project requires careful coordination because any defects or delivery delays can disrupt the overall schedule and raise complex questions about risk allocation. It is essential to clarify procurement responsibilities from the outset, specifying whether the contractor or employer is responsible for sourcing modules. Contracts should also address whether adding or removing modules counts as a variation, with clear terms for cost and time adjustments.

To manage these risks, construction contracts should explicitly reference modular data centres as pre-agreed options or permissible variations for technology upgrades, capacity changes or sustainability improvements. Embedding this flexibility in the contractual framework enables employers and operators to adapt to changing market conditions, while managing downside risks.

Conclusion

Ultimately, in a market defined by volatility as much as velocity, flexibility is the only durable edge. By embedding adaptive contracting, staged procurement and modular options from the outset, operators can capture the upside while containing the downside. The winners will be those who build for change, not for a single forecast.

The opinions expressed are those of the author and do not necessarily reflect the views of Vinson & Elkins or its clients. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

 

Notes

[1] Nick Lichtenberg, ‘Spending on AI Data Centers is so Massive that it’s Taken a Bigger Chunk of GDP Growth than Shopping – and it Could Crash the American Economy’ (Fortune, 6 August 2025) (original graph from Journalist X account) https://fortune.com/2025/08/06/data-center-artificial-intelligence-bubble-consumer-spending-economy/ accessed 27 January 2026.

[2] S&P Capital IQ data.

[3] Michael Dempsey, ‘What’s the Big Deal About AI Data Centres?’ (BBC News, 23 September 2025) www.bbc.co.uk/news/articles/ckg2ldpl9leo accessed 27 January 2026.

[4] Debra K Rubin and Johanna Knapschaefer, ‘Power Hungry: AI-Fueled Data Center Boom Sets Energy Delivery’s New Course’ (Engineering News-Record, 24 July 2025) www.enr.com/articles/61083-power-hungry-ai-fueled-data-center-boom-sets-energy-deliverys-new-course accessed 27 January 2026.

[5] See, eg, ChatGPT, Google’s Gemini and DeepSeek.

[6] Greg Macatee, ‘Navigating the AI Infrastructure Landscape’ (451 Alliance, 17 March 2025) https://blog.451alliance.com/navigating-the-ai-infrastructure-landscape/ accessed 27 January 2026.

[7] David Silver and Richard S Sutton, ‘Welcome to the Era of Experience’ (preprint chapter, forthcoming in Designing an Intelligence (MIT Press)) https://storage.googleapis.com/deepmind-media/Era-of-Experience%20/The%20Era%20of%20Experience%20Paper.pdf accessed 14 October 2025.

[8] Lily Mae Lazarus, ‘It’s Not Just Sam Altman Warning About an AI Bubble. Now Mark Zuckerberg Says a “Collapse” is “Definitely a Possibility”’ (Yahoo, 19 September 2025) https://finance.yahoo.com/news/not-just-sam-altman-warning-192543725.html accessed 27 January 2026.

[9] GPUs are chips built to run thousands of small calculations at once. First used for rendering images and video, this parallelism now accelerates data centre tasks like AI and data analysis beyond what a central processing unit (CPU) – optimised for fewer, sequential tasks – can manage. High-end GPUs draw significant power and generate heat, so they are typically deployed in clusters with robust cooling and networking.

[10] Moody’s Ratings, ‘Growing scale of new projects highlights overbuilds, tech risks amid booming demand’, p 5 https://events.moodys.com/2025-miu23930-energy-conference/growing-scale-of-new-projects-highlights-overbuild-tech-risks-amid-booming-demand accessed 27 January 2026.

[11] See Blue Circle Industries plc v Holland Dredging Co Ltd (1987) 37 BLR 40 and Cobalt Data Centre v HMRC [2024] UKSC 40.

[12] Informatech, 2024 State of the Data Centre Report, p 18 https://itchronicles.com/wp-content/uploads/2024/02/Data-Ctr-Report.pdf accessed 27 January 2026.

[13] Tom Dotan and Asa Fitch, ‘Why the AI Industry’s Thirst for New Data Centers Can’t Be Satisfied’ The Wall Street Journal (New York, 24 April 2024) www.wsj.com/tech/ai/why-the-ai-industrys-thirst-for-new-data-centers-cant-be-satisfied-93c7eff5?msockid=195fbda350db615206e5abf5516360e7 accessed 27 January 2026.

[14] See www.enr.com/articles/print/61140-amrize-meta-partner-on-low-carbon-concrete-for-minnesota-data-center accessed 27 January 2026.

[15] See www.moodys.com/web/en/us/insights/podcasts/behind-the-bonds/how-data-centers-are-defying-growth-risks.html accessed 27 January 2026.

Jake Owen is an associate at Vinson & Elkins in London. He can be contacted at jowen@velaw.com.