Data Centre

Liquid Cooling

Definition

Liquid cooling encompasses any data centre thermal management approach that uses liquid — typically water or a dielectric fluid — to remove heat from IT equipment. In GPU infrastructure, liquid cooling has moved from optional to essential as power densities exceed what air cooling can physically manage. A single DGX B200 system draws approximately 14.3kW, and a rack of such systems can reach 60-120kW. Air cooling becomes impractical above approximately 30kW per rack, making direct liquid cooling (DLC) the default for modern GPU deployments.
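The rack figures above translate directly into coolant flow requirements via the basic heat-transfer relation Q = ṁ·c·ΔT. A minimal sketch, assuming water as the coolant and an illustrative 10°C loop temperature rise (both assumptions, not figures from this entry):

```python
def coolant_flow_lpm(heat_kw: float, delta_t_c: float = 10.0) -> float:
    """Litres per minute of water needed to carry away heat_kw of IT load
    with a delta_t_c temperature rise across the loop (Q = m_dot * cp * dT)."""
    cp = 4186.0  # specific heat of water, J/(kg*K)
    mass_flow_kg_s = heat_kw * 1000.0 / (cp * delta_t_c)
    return mass_flow_kg_s * 60.0  # 1 kg of water is ~1 litre

# A 120kW rack with a 10 C rise needs roughly 172 L/min of water:
print(round(coolant_flow_lpm(120.0), 1))
```

The same arithmetic shows why air cooling fails at these densities: air's volumetric heat capacity is roughly 3,500 times lower than water's, so the equivalent airflow is physically impractical.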

Technical Context

The two primary approaches are direct-to-chip (cold plate) cooling and immersion cooling. Direct-to-chip cooling attaches cold plates to GPU and CPU packages; a coolant distribution unit (CDU) circulates coolant through the plates and isolates that loop from facility water. Immersion cooling submerges entire servers in dielectric fluid. Both require facility-level infrastructure: CDUs, piping, pumps, and heat rejection systems. The CDU market has seen rapid expansion — Mitsubishi, Panasonic, and DCX have all launched 1MW+ units. Rear-door heat exchangers serve as a hybrid approach for lower-density deployments.
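A rough way to relate the CDU capacities and rack densities mentioned above is a simple sizing estimate. The capture fraction and derating margin below are hypothetical illustration values, not vendor figures:

```python
def racks_per_cdu(cdu_kw: float, rack_kw: float,
                  liquid_capture: float = 0.8, derate: float = 0.9) -> int:
    """Estimate how many racks one CDU can serve, assuming only a fraction
    of rack heat is captured to liquid (rest rejected to air) and a
    derating margin on nameplate capacity. Illustrative assumptions only."""
    usable_kw = cdu_kw * derate
    liquid_heat_per_rack = rack_kw * liquid_capture
    return int(usable_kw // liquid_heat_per_rack)

# A 1MW CDU against 120kW racks under these assumptions:
print(racks_per_cdu(1000.0, 120.0))  # -> 9
```

Real sizing depends on supply temperature, approach temperature, and N+1 redundancy policy, so this is a first-pass estimate rather than a design method.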

Advisory Relevance

Cooling infrastructure is a critical evaluation point in both deployment advisory and due diligence. We assess whether data centre facilities have the mechanical infrastructure to support GPU workloads — many legacy facilities marketed as "AI-ready" lack adequate cooling capacity. The gap between marketed and actual cooling capability is a consistent finding in our technical assessments.

This glossary is maintained by Disintermediate as a reference for GPU infrastructure professionals, investors, and operators. Each entry reflects terminology as used in active advisory engagements and market intelligence work.
