The Power Density Problem
Data centres were designed for power densities of 3-10 kilowatts per rack. Enterprise servers (web servers, database servers, virtualisation hosts) fit comfortably within this envelope. A well-provisioned enterprise rack with 40 servers consumes 5-8kW in normal operation.
Current-generation GPU servers break this assumption completely. An NVIDIA GB200 NVL72 rack (a single 19-inch rack containing 72 B200 GPUs) consumes 120kW at full load. That is 15-40 times the draw of a conventional enterprise rack.
Deploying 32 such racks in a data centre requires 3.84MW of power, enough to supply roughly 3,000 typical UK homes. Most existing data centres cannot accommodate this. The electrical infrastructure (switchgear, PDUs, cabling) is not rated for these loads.
The cooling systems were not designed to remove 120kW of heat per rack. This has created a two-tier data centre market: legacy facilities that can host conventional IT, and purpose-built or significantly retrofitted facilities capable of hosting AI workloads. The two are not interchangeable.
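The deployment-level arithmetic is simple enough to sanity-check. The minimal sketch below uses the per-rack figures quoted above; the 3-8kW enterprise range is the band implied by the 15-40x comparison, not a measured baseline.

```python
# Back-of-envelope power arithmetic for a 32-rack GPU deployment.
GPU_RACK_KW = 120            # GB200 NVL72 rack at full load
ENTERPRISE_RACK_KW = (3, 8)  # conventional rack range implied by the 15-40x comparison
NUM_RACKS = 32

cluster_mw = GPU_RACK_KW * NUM_RACKS / 1000
low_multiple = GPU_RACK_KW / ENTERPRISE_RACK_KW[1]
high_multiple = GPU_RACK_KW / ENTERPRISE_RACK_KW[0]

print(f"Cluster IT load: {cluster_mw:.2f} MW")                                  # 3.84 MW
print(f"Density vs enterprise rack: {low_multiple:.0f}-{high_multiple:.0f}x")   # 15-40x
```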
Electrical Infrastructure for GPU Clusters
Delivering 120kW to a single rack requires 480V three-phase power at roughly 145 amperes per phase (120kW ÷ √3 ÷ 480V). Standard enterprise racks receive single-phase 32A circuits, approximately 7kW at 230V. The gap is substantial.
GPU-capable facilities must install high-density busways, high-amperage PDUs inside each rack, and switchgear scaled for megawatts rather than kilowatts. Cooling systems for a 3.84MW GPU deployment add approximately 20-30% overhead, so total facility power demand approaches 5MW. Utility connections at this scale require dedicated substation infrastructure.
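As a sketch of the electrical sizing behind these figures, the snippet below assumes a balanced load at 480V line-to-line, unity power factor, and a 25% cooling overhead (the midpoint of the range above); real designs add breaker derating and redundancy that this ignores.

```python
import math

# Electrical sizing sketch for a 32-rack GPU deployment (no derating or redundancy).
RACK_POWER_W = 120_000    # per-rack IT load
LINE_VOLTAGE_V = 480      # three-phase line-to-line voltage (assumed)
POWER_FACTOR = 1.0        # assumed; real loads run slightly below unity
NUM_RACKS = 32
COOLING_OVERHEAD = 0.25   # midpoint of the 20-30% range

# Balanced three-phase load: I_phase = P / (sqrt(3) * V_LL * PF)
amps_per_phase = RACK_POWER_W / (math.sqrt(3) * LINE_VOLTAGE_V * POWER_FACTOR)

it_load_mw = RACK_POWER_W * NUM_RACKS / 1e6
facility_mw = it_load_mw * (1 + COOLING_OVERHEAD)

print(f"Per-rack current: {amps_per_phase:.0f} A per phase")                    # ~144 A
print(f"IT load: {it_load_mw:.2f} MW; facility demand: ~{facility_mw:.1f} MW")  # 3.84 MW; ~4.8 MW
```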
Substation procurement and commissioning add 12-24 months to the delivery timeline for new GPU-capable facilities. Power Purchase Agreements (long-term contracts to buy electricity directly from generators) are common at this scale, typically offering 10-15% cost reductions versus spot market rates. For a detailed opex model across power configurations and geographies, speak to our advisory team at disintermediate.global/services.
Why Air Cooling Fails at This Density
Air cooling removes heat by flowing air over heatsinks and out of the rack. To remove 120kW of heat via air cooling, you need on the order of 10,000-20,000 cubic feet per minute per rack at an 18°C supply temperature, depending on how large a temperature rise the equipment will tolerate.
That volume requires fans drawing 3-5kW per rack in additional power, generates 90+ decibels at rack level, and demands cold-aisle dimensions that become physically impractical at scale. ASHRAE's Class A4 specification (the highest class for air cooling) is rated to 35kW per rack.
Current GPU clusters exceed this by a factor of 3-4. This is not a marginal exceedance; it is a fundamental incompatibility. NVIDIA's NVL72 rack specification explicitly requires liquid cooling. There is no air-cooled configuration.
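A back-of-envelope airflow calculation makes the point concrete. The sketch below assumes a 20°C air temperature rise across the rack; that figure is an illustrative assumption, not a vendor specification, and a tighter rise pushes the required airflow still higher.

```python
# Airflow required to remove a rack's heat load with air: Q = rho * cp * V_dot * dT
HEAT_LOAD_W = 120_000    # per-rack heat load
AIR_DENSITY = 1.2        # kg/m^3 near sea level
AIR_CP = 1005            # J/(kg*K)
DELTA_T_K = 20           # temperature rise across the rack (assumed)
M3S_TO_CFM = 2118.9      # cubic metres per second -> cubic feet per minute

flow_m3s = HEAT_LOAD_W / (AIR_DENSITY * AIR_CP * DELTA_T_K)
flow_cfm = flow_m3s * M3S_TO_CFM

print(f"Required airflow: {flow_m3s:.1f} m^3/s (~{flow_cfm:,.0f} CFM) per rack")  # ~5.0 m^3/s, ~10,500 CFM
print(f"Heat load vs 35 kW air-cooling ceiling: {HEAT_LOAD_W / 35_000:.1f}x")     # ~3.4x
```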
Liquid Cooling Technologies
Three liquid cooling approaches are deployed in current AI data centres. Direct Liquid Cooling (DLC) mounts cold plates directly to CPUs and GPUs. Coolant circulates through the cold plates, absorbing heat, then passes through a heat exchanger that transfers the heat to building chilled water.
DLC removes 85-90% of rack heat via liquid; the remainder is handled by residual airflow. System cost runs £25,000-£35,000 per rack for a DLC installation. Rear-door heat exchangers handle densities up to 30-40kW per rack, insufficient for current GPU loads but useful for mixed-use facilities.
Immersion cooling submerges entire servers in dielectric fluid, achieving the highest thermal efficiency at the cost of hardware accessibility and maintenance complexity. System cost runs £40,000-£65,000 per rack including tank, fluid, pumps, and filtration. Current NVIDIA NVL72 racks use DLC as standard.
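As an illustration of what the 85-90% liquid-capture figure means in practice, the sketch below splits a rack's heat load between the cold-plate loop and the residual air path; the percentages are the ones quoted above, not measurements.

```python
# Heat split for a DLC-cooled rack: cold plates take most of the load, room air takes the rest.
RACK_HEAT_KW = 120
LIQUID_CAPTURE_FRACTIONS = (0.85, 0.90)  # range quoted for DLC

for frac in LIQUID_CAPTURE_FRACTIONS:
    to_liquid_kw = RACK_HEAT_KW * frac
    to_air_kw = RACK_HEAT_KW - to_liquid_kw
    print(f"{frac:.0%} capture: {to_liquid_kw:.0f} kW to liquid, {to_air_kw:.0f} kW to room air")
# Even at 90% capture, 12-18 kW per rack still lands on conventional airflow.
```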
PUE: The Efficiency Metric That Matters
Power Usage Effectiveness (PUE) measures how efficiently a data centre converts total power input into useful IT load. A PUE of 1.0 would be perfect.
A PUE of 2.0 means for every watt of IT load, another watt is consumed by cooling and facility systems. Traditional data centres run PUEs of 1.5-1.8.
Modern hyperscaler facilities achieve 1.1-1.2. GPU-optimised facilities with direct liquid cooling can reach 1.03-1.1.
The financial impact is substantial. A 10MW GPU facility at PUE 1.6 consumes 16MW of total power; at PUE 1.1, it consumes 11MW. Over a year at £0.08/kWh, that difference is £3.5M in annual electricity cost. Nordic locations (Iceland, Norway, Sweden) achieve naturally low PUE through cold ambient temperatures; some Nordic facilities achieve PUE of 1.02-1.05 year-round, a structural advantage over southern US or Middle Eastern facilities. For a detailed opex model comparing facility locations and cooling configurations, contact Disintermediate at disintermediate.global/contact.
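The PUE cost arithmetic above can be reproduced in a few lines. The sketch assumes a flat £0.08/kWh tariff and a constant full IT load year-round; both are simplifications.

```python
# Annual electricity cost at a given PUE for a fixed IT load.
IT_LOAD_MW = 10
PRICE_GBP_PER_KWH = 0.08   # assumed flat tariff
HOURS_PER_YEAR = 8760

def annual_cost_gbp(pue: float) -> float:
    total_load_kw = IT_LOAD_MW * 1000 * pue
    return total_load_kw * HOURS_PER_YEAR * PRICE_GBP_PER_KWH

saving = annual_cost_gbp(1.6) - annual_cost_gbp(1.1)
print(f"PUE 1.6 vs 1.1 at {IT_LOAD_MW} MW IT load: £{saving:,.0f}/year difference")  # ~£3.5M
```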
Key Takeaways
Current GPU racks (NVL72) consume 120kW, 15-40x standard enterprise density, making most existing data centres incompatible
Electrical infrastructure for AI clusters requires 480V three-phase circuits at roughly 145A per phase per rack; standard enterprise provisioning is 32A single-phase
ASHRAE air cooling specifications cap at 35kW per rack; current GPU racks exceed this 3-4x, making liquid cooling mandatory
DLC costs £25k-£35k per rack; immersion costs £40k-£65k; current-generation deployments require one or the other
PUE at 1.1 versus 1.6 saves £3.5M/year on a 10MW facility; Nordic locations gain a structural PUE advantage from ambient temperature