The term "Gigawatt AI training cluster" refers to a data center capable of consuming one billion watts (1 GW) of power, a massive leap from the roughly 300 MW drawn by xAI's Colossus 1 in Memphis, which was initially brought online in 122 days and has since grown to about 200,000 H100/H200 GPUs.
At roughly 1 GW of continuous draw, Colossus II would consume on the order of 24 GWh per day, with the exact figure depending on utilization and cooling demands. That far exceeds the roughly 7.2 GWh per day implied by Colossus 1's 300 MW.
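A quick sanity check of those daily-energy figures, assuming the stated site power is drawn continuously (an idealization; real draw varies with utilization and cooling overhead):

```python
# Back-of-envelope: daily energy at constant site power.
# The 0.3 GW and 1.0 GW inputs are the figures cited above, not measurements.

def daily_energy_gwh(power_gw: float, hours: float = 24.0) -> float:
    """Energy in GWh for a site drawing `power_gw` continuously for `hours`."""
    return power_gw * hours

colossus_1 = daily_energy_gwh(0.3)   # ~300 MW site
colossus_2 = daily_energy_gwh(1.0)   # ~1 GW site

print(f"Colossus 1: {colossus_1:.1f} GWh/day")   # ~7.2 GWh/day
print(f"Colossus 2: {colossus_2:.1f} GWh/day")   # ~24.0 GWh/day
```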
Estimates suggest Colossus II might house at least 700,000 GPUs, based on speculation that it will use Blackwell-class B100 chips drawing around 1,400 W (1.4 kW) each, surpassing the 200,000 GPUs in Colossus 1.
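The 700,000-GPU figure follows directly from dividing site power by per-GPU draw. The sketch below assumes the 1.4 kW number covers all-in draw per GPU (server and cooling overhead included) and that essentially all site power feeds GPU nodes; both are simplifications, not published specs:

```python
# Rough GPU-count estimate implied by the power figures above.

SITE_POWER_W = 1_000_000_000   # 1 GW site
WATTS_PER_GPU = 1_400          # assumed all-in draw per Blackwell-class GPU

gpu_count = SITE_POWER_W / WATTS_PER_GPU
print(f"Implied GPU count: {gpu_count:,.0f}")   # ~714,000 GPUs
```

Any overhead not folded into the per-GPU number (networking, storage, power-conversion losses) would pull the real count below this estimate, which is consistent with the "at least 700,000" framing.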
Compared with other clusters, Colossus II offers roughly five times the compute of Colossus 1 and outpaces everything else by a wide margin: OpenAI's and Meta's planned gigawatt-scale clusters are years from completion, so xAI will remain, as it already is, miles ahead of its closest competitor.








