Many people say the AI sector is in a bubble, and maybe that's true for some companies. What they fail to understand is that many of the companies deploying billions in CapEx are, in effect, the government.
The end goal, of course, is AI governance.
Many portfolio managers, hedge funds, and defense-aligned funds already know this, and you can see it reflected in financial markets.
So how much AI/high-performance computing (HPC) would the Controllers need for AI governance?
Think in three compute tiers, because that's how you'd actually run it at scale (a toy sketch follows the list):
1. Edge sieve (cheap, everywhere): cameras/phones/routers/ATMs/industrial sensors run tiny models to tag, hash, and discard 99.9% of the raw stream.
2. Regional fusion (medium, many): metro/co-lo sites run multi-modal "situation" models across streams (ID, location, payments, comms, logistics).
3. Central brains (heavy, few): national/ally clusters train & steer giant world-models + simulations; push down policies/weights.
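To make the tiering concrete, here is a minimal Python sketch of how events might flow through the three tiers. Everything in it (function names, salience scores, the toy policy) is a hypothetical illustration; the only figure taken from the list above is the ~99.9% edge discard rate.

```python
# Toy sketch of the three-tier pipeline; all names and thresholds are
# hypothetical illustrations, not a description of any real system.
import hashlib
import random

EDGE_KEEP_RATE = 0.001  # edge sieve discards ~99.9% of the raw stream

def edge_sieve(raw_events):
    """Tier 1: tag/hash events on-device, forward only the salient ~0.1%."""
    for event in raw_events:
        if event["salience"] >= 1 - EDGE_KEEP_RATE:  # stand-in for a tiny model's score
            digest = hashlib.sha256(event["payload"].encode()).hexdigest()
            yield {"hash": digest, "kind": event["kind"], "salience": event["salience"]}

def regional_fusion(edge_events):
    """Tier 2: group surviving events into per-stream 'situations'."""
    situations = {}
    for ev in edge_events:
        situations.setdefault(ev["kind"], []).append(ev)
    return situations

def central_brain(situations):
    """Tier 3: consume fused situations, emit an updated policy to push down."""
    flagged = {kind: len(evs) for kind, evs in situations.items()}
    return {"policy_version": 1, "flagged_streams": flagged}

# Simulated raw stream: most events are low-salience noise the edge drops.
raw = [
    {"payload": f"event-{i}",
     "kind": random.choice(["payment", "travel", "comms"]),
     "salience": random.random()}
    for i in range(100_000)
]
print(central_brain(regional_fusion(edge_sieve(raw))))
```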
Back-of-the-envelope capacity (ballpark, not brochure math)
- Population-scale monitoring target: suppose you want continuous coverage of meaningful signals across ~8B people plus critical infrastructure. After edge filtering you still ingest, say, 10–100 events/person/day (payments, travel gates, high-salience comms, checkpoints, high-risk IoT devices). Call it 10¹¹–10¹² events/day into regional fusion.
- Regional fusion inference: lightweight multi-modal models at ~1–10 GFLOPs (billions of floating-point operations) per event, post-edge. That's 10²⁰–10²² FLOPs/day ⇒ roughly 1–100 PFLOP/s sustained just for regional inference.
- Central training & simulation: persistent fine-tuning of trillion-parameter world-models, policy reinforcement learning (RL), counterfactual simulations. Realistically 10–100 EFLOP/s (exaFLOP/s) peak (not sustained 24/7, but frequent). Plus a few EFLOP/s for national-level inference/agentic planners.
- Power footprint: today's top AI data centers run 100–300 MW each. A governance-grade grid is ~50–150 sites at 100–300 MW ≈ 5–45 GW facility power (at a Power Usage Effectiveness of ~1.2), with bursts and redundancy. That's multiples of what's broadly deployed now: not infinite, but constrained by power, High Bandwidth Memory (HBM), packaging, and grid plumbing, not by demand. (The arithmetic is sanity-checked in the snippet below.)
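A quick sanity check of the arithmetic above, using only the post's own input assumptions (event volumes, per-event cost, site counts, MW per site); nothing here is measured data.

```python
# Back-of-the-envelope check using the assumptions stated above.
SECONDS_PER_DAY = 86_400

# Regional fusion inference load: events/day x GFLOPs/event.
events_per_day = (1e11, 1e12)     # post-edge events into regional fusion
gflops_per_event = (1, 10)        # lightweight multi-modal model cost

flops_day_lo = events_per_day[0] * gflops_per_event[0] * 1e9
flops_day_hi = events_per_day[1] * gflops_per_event[1] * 1e9
print(f"regional: {flops_day_lo:.0e}-{flops_day_hi:.0e} FLOPs/day, "
      f"{flops_day_lo / SECONDS_PER_DAY:.1e}-{flops_day_hi / SECONDS_PER_DAY:.1e} FLOP/s sustained")
# -> 1e+20-1e+22 FLOPs/day, 1.2e+15-1.2e+17 FLOP/s (~1-100 PFLOP/s)

# Facility power for the site grid: sites x MW per site.
low_gw = 50 * 100 / 1000
high_gw = 150 * 300 / 1000
print(f"grid: {low_gw:.0f}-{high_gw:.0f} GW facility power")
# -> 5-45 GW
```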
Accelerator count (NVIDIA H100-ish equivalents; the power math is cross-checked below):
- Moderate regime: ~5M accelerators (IT power ~3–4 GW).
- Hard regime: ~20M (IT ~14 GW).
- Maximal "omnivision": ~50M (IT ~35 GW).
These numbers are feasible only if you solve: HBM output, CoWoS/SoIC packaging capacity, 2–3 nm leading-edge wafer supply, and multi-GW interconnect + cooling.
CoWoS (Chip on Wafer on Substrate) and SoIC (System on Integrated Chips) are semiconductor packaging technologies that integrate multiple chips and components into a single package for higher performance and efficiency; their "capacity" is how many such packages the industry can actually assemble.
TL;DR 👇️
Takeaway: For AI governance under a low Gross Consent Product lens (stability > truth), demand for AI/HPC is effectively insatiable for a decade.
The constraint is power + packaging + memory, not "use cases". More capacity yields broader context windows, deeper cross-domain fusion, and faster simulation cycles, which directly improves control quality. There is no natural upper bound until the grid and supply chain say "no".
More context:
https://controlplanecapital.com/p/why-the-controllers-use-public-companies
https://controlplanecapital.com/p/state-embedded-investment-thesis
https://controlplanecapital.com/p/gross-consent-product-make-informed
https://controlplanecapital.com/p/filtering-state-embedded-companies
https://controlplanecapital.com/p/public-facing-elites-using-myth-making
https://controlplanecapital.com/p/short-selling-weaponized-against