AI Workloads Are Rewriting the Rules of Data Centre Design
The rapid rise of artificial intelligence is putting unprecedented pressure on electricity grids worldwide, the International Energy Agency has warned. According to Schneider Electric’s White Paper 110, six key AI attributes are fundamentally changing data centre design: rack density, accelerator network communication, thermal design power (TDP), peak power behaviour, synchronous computation, and AI cluster size. Rack power densities have now surpassed 100 kW, and the trend shows no sign of slowing. Meanwhile, power procurement timelines stretch across years, and the shift from air to liquid cooling is no longer optional. In this new environment, data centre operators face challenges their facilities were never designed to handle.
The Six AI Attributes Reshaping Physical Infrastructure
Schneider Electric’s analysis identifies six trends that directly impact power, cooling, rack, and operational systems:
Rack density - Training clusters demand distributed parallel processing, driving rack densities beyond 100 kW. Inference workloads vary widely, from single servers to multi‑rack configurations.
Accelerator network communication - To minimise costly, high‑latency inter‑rack fibre, designers pack as many GPUs as possible into each rack, further increasing density.
Rising TDP of accelerators - NVIDIA’s B200 SXM draws 1,000 W; the next‑generation Rubin architecture has reportedly been locked in at 2.3 kW per GPU.
Peak power of accelerators - GPUs can exceed their TDP multiple times per second, creating millisecond‑scale power transients.
Synchronous computation - Unlike traditional asynchronous workloads, AI training clusters see all servers peak simultaneously, producing step loads that can double total power demand almost instantaneously (illustrated in the sketch after this list).
AI cluster size - Hundreds or thousands of high‑density racks operating in concert stress every layer of physical infrastructure.
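To make the last two attributes concrete, here is a minimal sketch of why synchronous computation is so disruptive. All figures (server count, idle and peak draw, the 30 per cent asynchronous duty cycle) are illustrative assumptions, not measured values:

```python
import random

# Minimal sketch with assumed figures: contrast the aggregate power of
# asynchronous enterprise servers with a synchronous AI training cluster.
N_SERVERS = 512
IDLE_KW, PEAK_KW = 0.6, 1.2   # assumed per-server idle and peak draw

# Asynchronous workloads: servers peak independently, so at any instant
# only a fraction (assume ~30%) are at peak and the total stays smooth.
async_total_kw = sum(
    PEAK_KW if random.random() < 0.3 else IDLE_KW for _ in range(N_SERVERS)
)

# Synchronous AI training: collective operations drive every server to
# peak at the same instant, so the facility sees a large step load.
sync_idle_kw = N_SERVERS * IDLE_KW
sync_peak_kw = N_SERVERS * PEAK_KW

print(f"asynchronous mix : {async_total_kw:7.1f} kW (roughly steady)")
print(f"synchronous idle : {sync_idle_kw:7.1f} kW")
print(f"synchronous peak : {sync_peak_kw:7.1f} kW "
      f"(a near-instant step of {sync_peak_kw - sync_idle_kw:.0f} kW)")
```

With these assumed numbers, the synchronous cluster swings from roughly 307 kW to 614 kW in one step, exactly the doubling behaviour that upstream UPS and generator systems were never sized for.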
The Australian Context: A Nation at the Crossroads of AI Infrastructure
Australia’s data centre market is in the midst of an unprecedented transformation. The market is projected to surge from USD 4.22 billion in 2025 to USD 9.02 billion by 2031, growing at a compound annual rate of 13.5 per cent. Construction activity alone is forecast to reach AUD 6.24 billion by 2034, while enterprise and service provider spending on data centre systems is set to increase 22.5 per cent in 2026 to A$10.1 billion. The nation now hosts approximately 145 operational colocation data centres, most built to Tier III standards, with the wholesale and hyperscale segments leading revenue growth.
On the power front, built‑out data centre capacity is projected to more than double from 1,350 MW in 2024 to 3,100 MW by 2030. Some analysts forecast even higher: Morgan Stanley expects capacity to reach 3.2 GW by 2030. Data centres currently draw about 2 per cent of electricity from the National Electricity Market, approximately 4 terawatt hours annually. The Australian Energy Market Operator (AEMO) expects that share to rise rapidly - growing 25 per cent year‑on‑year - to reach 12 TWh, or 6 per cent of grid demand, by 2030, and 12 per cent by 2050. In New South Wales and Victoria, where most facilities are concentrated, data centres could account for 11 per cent and 8 per cent of state electricity demand, respectively, by 2030.
This demand surge has prompted major grid reforms. The Australian Energy Market Commission (AEMC) finalised a comprehensive overhaul of technical connection requirements for the National Electricity Market, effective from August 2025, to better manage how large electricity users like data centres connect to the grid. Transmission company AusNet has reportedly received load enquiries for more than 8 GW of data centre capacity, with some proposals exceeding the consumption of Australia’s aluminium smelters - the country’s largest energy users.
Investment announcements have reached historic levels. Amazon announced the largest technology investment in Australian history: AU$20 billion (US$13 billion) between 2025 and 2029, supported by three new solar farms in Victoria and Queensland. Microsoft is expanding its footprint with AU$5 billion to launch nine new data centres and a training academy. OpenAI signed a Memorandum of Understanding with NEXTDC in December 2025 to develop Australia’s first sovereign AI infrastructure partnership, centred on a 650 MW hyperscale AI campus in Western Sydney with an investment exceeding $7 billion. Infrastructure startup Firmus has raised $327 million to support “Project Southgate”, aiming to scale its capacity to 1.6 GW by 2028 across Tasmania, Melbourne, Sydney, Canberra and Perth. Collectively, major operators including Amazon, Microsoft, CDC and NEXTDC are set to invest more than $26 billion by the end of the decade.
Cooling is equally critical. NEXTDC’s M4 Fishermans Bend AI Factory in Melbourne integrates liquid cooling across 150 MW, allowing the campus to host training and inference nodes that approach 100 kW per rack. ResetData has launched liquid immersion‑cooled AI Factory data centres that it claims are up to 10 times more efficient than traditional sites, reducing emissions by up to 45 per cent and lowering operating costs by 40 per cent. Sydney has approved the Southern Hemisphere’s largest data centre - a $3.1 billion hyperscale facility at Marsden Park - targeting a Water Usage Effectiveness (WUE) of 0.01 through an air‑based cooling system that reuses chilled water.
Yet water scarcity looms large. The World Economic Forum estimates a one‑megawatt data centre can consume up to 25.5 million litres of water annually just for cooling. Some Sydney data centre developers are already requesting access to 40 million litres of water per day. Greater Western Water has been assessing applications requesting nearly 20 gigalitres of water a year - on par with the usage of 330,000 Melburnians. The peak water industry body has called for national water‑efficiency standards for new data centre developments.
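The Melburnian comparison can be sanity‑checked with simple arithmetic; the sketch below divides the requested volume by the stated population and lands near typical residential water use:

```python
# Sanity-checking the comparison above: ~20 GL/year requested vs the
# water use of 330,000 Melburnians (inputs are the figures quoted above).
LITRES_PER_GL = 1e9
requested_l_per_year = 20 * LITRES_PER_GL

per_person_per_day = requested_l_per_year / (330_000 * 365)
print(f"{per_person_per_day:.0f} L per person per day")
# ~166 L/day, in line with typical residential use, so the population
# comparison is plausible.
```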
Renewable energy procurement is also intensifying. Major operators and their big‑tech customers have collectively committed to matching their power use with 100 per cent renewable energy by 2030, primarily through power purchase agreements. AirTrunk is working on a 30 MW solar project in NSW with Google and a massive 200 MW project in Hong Kong for Microsoft. Google and AirTrunk have also signed a solar PPA with OX2 to add 25 MW of renewable energy to the Australian grid.
The push for sovereign AI capability is shaping the regulatory landscape. The Commonwealth unveiled its National AI Plan in December 2025, setting out a roadmap to drive development and adoption by bolstering investment in skills training and data centres. Both state and federal governments have announced their intention to position Victoria and Australia as a data centre hub for the Asia‑Pacific region. Federal rules requiring sensitive workloads to remain on‑shore have spurred Microsoft, AWS and Google to double down on Tier IV campuses with explicit data‑residency guarantees. The Reserve Bank of Australia is retiring the Bulk Electronic Clearing System by June 2030, moving 3.5 billion annual transactions to the always‑on New Payments Platform, which elevates uptime requirements to 99.995% - effectively making Tier IV (fault tolerance) the baseline for clearing houses and core banking stacks.
The Power Challenge: From Procurement to Protection
The IEA projects that global data centre electricity consumption will more than double to around 945 TWh by 2030 - slightly more than Japan’s total electricity consumption today. In the United States, data centres account for nearly half of electricity demand growth between now and 2030; by the end of the decade, the country will consume more electricity for data centres than for the production of aluminium, steel, cement, chemicals and all other energy‑intensive goods combined.
AI‑focused accelerated servers are driving this increase. The IEA warns that “AI‑focused data centres can draw as much electricity as power‑intensive factories such as aluminium smelters, but they are much more geographically concentrated”. Nearly half of US data centre capacity is located in five regional clusters, raising local grid risks. Unless these risks are addressed, around 20% of planned data centre projects could be at risk of delays.
Grid connection queues are long and complex. According to Lawrence Berkeley National Laboratory, a new power plant now takes almost five years to move from interconnection request to commercial operation. In the UK, Ofgem reports that contracted offers in the demand connection queue rose sharply from 41 GW to 125 GW by June 2025, compared with peak electricity use of just 45 GW.
Even after power is secured, the internal electrical system must handle peak loads, rapid step loads, and elevated arc‑flash hazards. Legacy 240/415 VAC distribution often must be upgraded to higher voltages such as 800 VDC, and power block sizes must increase to support multiple 100 kW racks.
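The voltage question is ultimately Ohm's‑law arithmetic: for a fixed load, current (and therefore copper) scales inversely with voltage. A minimal sketch, using an assumed 0.95 power factor for the AC cases:

```python
import math

# Minimal sketch with assumed values: conductor current needed to feed a
# single 100 kW rack under different distribution schemes. Less current
# means smaller copper cross-sections, busways, and connectors.
P_W = 100_000   # rack load in watts
PF = 0.95       # assumed power factor for the AC cases

i_240_1ph = P_W / (240 * PF)                  # legacy 240 VAC single-phase
i_415_3ph = P_W / (math.sqrt(3) * 415 * PF)   # 415 VAC three-phase, per phase
i_800_dc = P_W / 800                          # 800 VDC

print(f"240 VAC single-phase : {i_240_1ph:6.0f} A")
print(f"415 VAC three-phase  : {i_415_3ph:6.0f} A per phase")
print(f"800 VDC              : {i_800_dc:6.0f} A")
```

At 240 VAC a single 100 kW rack would demand over 400 A; at 800 VDC it needs 125 A, which is why higher‑voltage distribution becomes unavoidable as densities climb.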
The global scale of investment required is staggering. McKinsey projects that nearly US$7 trillion will need to be invested in global data centre infrastructure by 2030 to meet rising demand for AI. Global data centre capacity could triple by 2030, with 70 per cent of demand coming from AI workloads. Capital spending on data centre infrastructure, excluding the IT hardware itself, will surpass $1.7 trillion by 2030. JLL characterises the current period as the early stages of a $3 trillion global infrastructure supercycle, with nearly 100 GW of new data centre capacity projected to come online between 2026 and 2030 - effectively doubling today’s installed base. AI workloads are forecast to represent 50 per cent of all data centre capacity by 2030, compared with approximately 25 per cent in 2025, and a critical inflection point could come in 2027, when AI inference workloads overtake training as the dominant requirement.
The Cooling Transition: Liquid Is No Longer Optional
Air cooling can still handle higher TDPs in theory - but only with taller heatsinks, which reduce the number of GPUs per rack. Liquid cooling (direct‑to‑chip) solves this: cold plates under an inch tall allow far greater GPU density. However, liquid‑cooled servers impose stringent requirements on coolant temperature, flow, and chemistry. The lack of industry standards for interfaces, fluid properties, and integration drives complexity and cost.
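Those flow and temperature constraints come straight from a first‑principles energy balance. The sketch below, with assumed water properties and an assumed 10 K supply‑to‑return rise, shows why the tolerances are tight:

```python
# Minimal sketch, first-principles energy balance with assumed values:
# the coolant flow needed to carry a rack's heat away at a given
# supply-to-return temperature rise:  m_dot = P / (c_p * dT)
P_W = 100_000          # heat to reject from a 100 kW rack, in watts
CP_J_PER_KG_K = 4186   # specific heat of water (water-glycol is lower)
DT_K = 10.0            # assumed supply-to-return temperature rise

m_dot_kg_s = P_W / (CP_J_PER_KG_K * DT_K)
flow_l_min = m_dot_kg_s * 60   # ~1 kg per litre for water

print(f"required flow: {m_dot_kg_s:.2f} kg/s = {flow_l_min:.0f} L/min")
# Halving dT to 5 K doubles the required flow, which is why coolant
# temperature, flow, and chemistry tolerances are so tight.
```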
The data centre liquid cooling market is expanding rapidly, projected to grow from $5.1 billion in 2025 to $16.16 billion in 2030 at a compound annual growth rate of 26 per cent. Major trends include the adoption of AI‑optimised liquid cooling, IoT‑based cooling monitoring, and the expansion of sustainable cooling practices, driven by demand for improved cooling efficiency, energy savings, scalability, and sustainability.
Racks: Built for Weight and Density
AI racks now exceed 1,300 kg and require static weight capacity over 2,270 kg. Standard 600 mm wide racks are inadequate; industry recommendations point to ≥750 mm width, ≥1,200 mm depth, and raised‑floor‑independent design. Structured cabling trays must accommodate dense network fabrics, and concrete slab floors must be validated for loads exceeding 3,000 kg.
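Translating those rack figures into slab loading is straightforward arithmetic; the sketch below uses the weights and footprint quoted above:

```python
# Minimal sketch using the figures quoted above: distributed floor load
# for a fully populated AI rack on the recommended footprint.
RACK_KG = 1_300                # populated AI rack weight
FOOTPRINT_M2 = 0.75 * 1.20     # >=750 mm width x >=1,200 mm depth

print(f"distributed load: {RACK_KG / FOOTPRINT_M2:,.0f} kg/m^2")
# ~1,440 kg/m^2 before coolant, cable trays, or technicians are added,
# hence the >2,270 kg static rack rating and >3,000 kg slab validations.
```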
Yet the vast majority of organisations today still do not support densities above 20 kW per rack, and reported average densities have actually declined in recent years. According to the latest Uptime Institute Global Data Center Survey, the picture would look even worse if hyperscalers were removed from the analysis. Four intersecting factors keep operators from achieving greater densities: poor server utilisation (average server utilisation is only 12-18 per cent), comfort with the status quo, the difficulty of retrofitting, and added business risk.
However, the industry is moving. Rack densification is happening, just not at the pace many assume. AI‑native data centres in 2026 are deploying per‑rack densities well above the 20-40 kW typical of legacy deployments, and Goldman Sachs forecasts a 50 per cent increase in global power demand from data centres by 2027.
Software Tools, Digital Twins and Operational Risk
High‑density AI clusters leave little room for error. Electrical power management systems (EPMS), data centre infrastructure management (DCIM) tools, and digital twins are essential for capacity assessment, dynamic load management, and “what‑if” scenario planning. Electrical design software simplifies protection coordination, short‑circuit evaluation, and arc‑flash studies.
A digital twin of the entire IT space - including equipment and VMs in the racks - allows operators to validate power, cooling, and floor weight capacities before making changes. When capacity safety margins shrink, the risk of tripping a breaker, creating a hot spot, or stranding resources increases dramatically. Many DCIM planning and modelling tools now include computational fluid dynamics (CFD) capabilities to verify adequate airflow given the physical layout of equipment and heat load.
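Here is a minimal sketch of the kind of pre‑change validation such tools automate. The class, limits, and 10 per cent safety margin are illustrative assumptions, not any vendor's API:

```python
from dataclasses import dataclass

# Minimal sketch of a pre-change capacity check; the class, limits, and
# margin are illustrative assumptions, not a real DCIM vendor's API.

@dataclass
class RackPosition:
    power_kw_headroom: float     # remaining branch-circuit capacity
    cooling_kw_headroom: float   # remaining heat-rejection capacity
    floor_kg_headroom: float     # remaining validated slab capacity

def can_deploy(pos: RackPosition, power_kw: float, weight_kg: float,
               margin: float = 0.10) -> bool:
    """Approve a deployment only if every constraint keeps a safety margin."""
    return (
        power_kw * (1 + margin) <= pos.power_kw_headroom
        # Nearly all IT power leaves as heat, so cooling mirrors power.
        and power_kw * (1 + margin) <= pos.cooling_kw_headroom
        and weight_kg * (1 + margin) <= pos.floor_kg_headroom
    )

# Example: a 100 kW, 1,300 kg AI rack against one candidate position.
pos = RackPosition(power_kw_headroom=120.0, cooling_kw_headroom=105.0,
                   floor_kg_headroom=2_000.0)
print(can_deploy(pos, power_kw=100.0, weight_kg=1_300.0))  # False: cooling short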
What’s Coming Next
Several emerging technologies will soon become mainstream:
Medium‑voltage distribution in the IT space - reducing copper, conductors, and installation time.
Solid‑state transformers and circuit breakers - smaller, lighter, faster‑opening protection that dramatically reduces arc‑flash energy.
Sustainable dielectric fluids - replacing water‑glycol mixes to address PFAS and GWP concerns.
Higher‑voltage racks - as densities move from 100 kW to 1 MW and beyond, distribution voltage must increase to keep conductor sizes and copper quantities manageable.
Water‑free or water‑limited cooling - becoming the norm in water‑scarce regions.
Scheduling workloads based on grid conditions - migrating loads to different redundancy zones or placing a UPS on battery operation to help balance the grid and save on electricity (sketched after this list).
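As a sketch of that last idea, consider a simple policy that maps a grid stress signal to an action for deferrable AI workloads. The spot‑price signal and thresholds are hypothetical, not a real market API:

```python
# Minimal sketch of grid-aware scheduling; the spot-price signal and
# thresholds are hypothetical, not a real market API.

def grid_response(price_per_mwh: float, deferrable: bool) -> str:
    """Map a grid stress signal to an action for AI workloads."""
    if price_per_mwh < 60:
        return "run normally"                       # cheap, low-stress grid
    if deferrable and price_per_mwh < 300:
        return "defer or migrate to another zone"   # shift training jobs
    if price_per_mwh >= 300:
        return "ride through on UPS batteries"      # shave the facility's peak
    return "run normally"                           # non-deferrable inference

for price in (45, 120, 450):
    print(f"${price}/MWh -> {grid_response(price, deferrable=True)}")
```

Training jobs are checkpointed and restartable, which makes them natural candidates for this kind of deferral; latency‑sensitive inference generally is not.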
This article draws on insights from Schneider Electric White Paper 110, “How 6 AI Attributes Change Data Center Design”, alongside analysis from the International Energy Agency, Lawrence Berkeley National Laboratory, Uptime Institute, McKinsey & Company, and JLL.
Read More: Our Latest Insights
AI infrastructure is evolving faster than ever. For a deeper look, explore our recent articles:
AI Training Boom - How surging demand for AI training capacity is reshaping facility planning.
Is Your Colocation Facility Ready for the AI Revolution? - Key considerations for colocation providers facing AI‑scale deployments.