Many people say the AI sector is in a bubble, and maybe that is true for some companies. What they fail to understand is that many of the companies deploying billions in CapEx are, in effect, the government. The end goal, of course, is AI governance. Many portfolio managers, hedge funds, and defense-aligned funds already know this, and you can see it reflected in financial markets.

So how much AI/High-Performance Computing (HPC) would the Controllers need for AI governance? Think in three compute tiers (because that's how you'd actually run it at scale):

1. Edge sieve (cheap, everywhere): cameras/phones/routers/ATMs/industrial sensors run tiny models to tag, hash, and discard 99.9% of the raw stream.
2. Regional fusion (medium, many): metro/co-lo sites run multi-modal "situation" models across streams (ID, location, payments, comms, logistics).
3. Central brains (heavy, few): national/ally clusters train and steer giant world-models plus simulations, then push policies/weights back down.

Back-of-the-envelope capacity (ballpark, not brochure math; you can re-run the numbers with the sketch at the end of this post):

- Population-scale monitoring target: suppose you want continuous coverage of meaningful signals across ~8B people plus critical infrastructure. After edge filtering you still ingest, say, 10–100 events per person per day (payments, travel gates, high-salience comms, checkpoints, high-risk Internet of Things). Call it 10¹¹–10¹² events/day into regional fusion.
- Regional fusion inference: lightweight multi-modal models at ~1–10 gigaFLOPs (GFLOPs) per event (post-edge). That's 10²⁰–10²² FLOPs/day ⇒ roughly 1–100 petaFLOP/s (PFLOP/s) sustained just for regional inference.
- Central training & simulation: persistent fine-tuning of trillion-parameter world-models, policy Reinforcement Learning, counterfactual simulations. Realistically 10–100 exaFLOP/s (EFLOP/s) peak (not sustained 24/7, but frequent), plus a few EFLOP/s for national-level inference/agentic planners.
- Power footprint: today's top AI data centers run 100–300 MW each. A governance-grade grid is ~50–150 sites at 100–300 MW ⇒ roughly 5–30 GW of facility power (at a Power Usage Effectiveness of ~1.2), with bursts and redundancy. That's multiples of current hyperscale, far above what's broadly deployed now; not infinite, but constrained by power, High Bandwidth Memory (HBM), packaging, and grid plumbing, not by demand.

Accelerator count (NVIDIA H100-ish equivalents):

- Moderate regime: ~5M accelerators (IT power ~3–4 GW).
- Hard regime: ~20M (IT ~14 GW).
- Maximal "omnivision": ~50M (IT ~35 GW).

These numbers are feasible only if you solve HBM output, CoWoS/SoIC capacity, 2–3nm leading-edge supply, and multi-GW interconnect + cooling. CoWoS (Chip on Wafer on Substrate) and SoIC (System on Integrated Chips) are advanced semiconductor packaging technologies that integrate multiple chips and components into a single package, improving performance and efficiency; their capacity is one of the binding constraints here.

TL;DR 👇️
Takeaway: For AI governance under a low Gross Consent Product lens (stability > truth), demand for AI/HPC is effectively insatiable for a decade. The constraint is power + packaging + memory, not "use cases". More capacity yields broader context windows, deeper cross-domain fusion, and faster simulation cycles, all of which directly improve control quality. There is no natural upper bound until the grid and supply chain say "no".
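If you want to check the ballpark yourself, here is a minimal Python sketch of the same back-of-the-envelope arithmetic. It uses only the assumptions stated above plus an assumed ~700 W per accelerator (roughly H100 SXM-class TDP, my input rather than the post's); tweak the inputs and the conclusions move with them.

```python
# Back-of-the-envelope sanity check for the governance-compute ballpark above.
# Inputs are the post's stated assumptions, except WATTS_PER_ACCELERATOR (assumed).

SECONDS_PER_DAY = 86_400

# --- Regional fusion inference demand ---
population = 8e9                     # ~8B people
events_per_person_day = (10, 100)    # post-edge-filter events per person per day
gflops_per_event = (1, 10)           # lightweight multi-modal model cost per event

for label, ev, gf in [("low", events_per_person_day[0], gflops_per_event[0]),
                      ("high", events_per_person_day[1], gflops_per_event[1])]:
    events_per_day = population * ev                 # ~1e11 .. ~1e12 events/day
    flops_per_day = events_per_day * gf * 1e9        # ~1e20 .. ~1e22 FLOPs/day
    sustained = flops_per_day / SECONDS_PER_DAY      # FLOP/s, averaged over a day
    print(f"regional fusion ({label}): {events_per_day:.1e} events/day, "
          f"{flops_per_day:.1e} FLOPs/day, {sustained / 1e15:.1f} PFLOP/s sustained")

# --- Accelerator fleets and power (PUE ~1.2 from the post) ---
WATTS_PER_ACCELERATOR = 700          # assumed H100-class board power
PUE = 1.2

for regime, accelerators in [("moderate", 5e6), ("hard", 20e6), ("maximal", 50e6)]:
    it_power_gw = accelerators * WATTS_PER_ACCELERATOR / 1e9
    facility_gw = it_power_gw * PUE
    print(f"{regime}: {accelerators / 1e6:.0f}M accelerators, "
          f"IT ~{it_power_gw:.1f} GW, facility ~{facility_gw:.1f} GW")
```

Running it reproduces the figures above: roughly 1–100 PFLOP/s sustained for regional fusion, and ~3.5/14/35 GW of IT power for the 5M/20M/50M fleets (before the PUE multiplier).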
More context:
https://controlplanecapital.com/p/why-the-controllers-use-public-companies
https://controlplanecapital.com/p/state-embedded-investment-thesis
https://controlplanecapital.com/p/gross-consent-product-make-informed
https://controlplanecapital.com/p/filtering-state-embedded-companies
https://controlplanecapital.com/p/public-facing-elites-using-myth-making
https://controlplanecapital.com/p/short-selling-weaponized-against
One of the few things that excite me about AI video is realistic, well-made simulations. E.g. imagine a realistic simulation of a world without Central Banks. Imagine a realistic simulation of a world in which the slaves organize a tax revolt and reject fiat slavery. We live in such a weird world that most people will never be able to break the programming unless they can visualize the problem and the solution.

Basically all financial analysts track liquidity cycles, but none of them analyze why liquidity is cyclical (at least not openly). Allegedly Thomas Jefferson did:

“If the American people ever allow private banks to control the issue of their currency, first by inflation, then by deflation, the banks and corporations that will grow up around them will deprive the people of all property until their children wake up homeless on the continent their Fathers conquered... I believe that banking institutions are more dangerous to our liberties than standing armies... The issuing power should be taken from the banks and restored to the people, to whom it properly belongs.”

Richard Werner made a documentary (and wrote a book) about this, Princes of the Yen; however, it didn't do a very good job of explicitly stating the problem, and it didn't identify the solution.

The Problem: the Central Banks of all countries (that matter) coordinate liquidity. They cause inflation waves or crises to discipline leverage, herd behavior, migrate usage onto programmable rails, and re-select winners, all without the blowback of permanent financial repression.

The only Solution: a mass tax revolt and rejection of fiat slavery.

The only way this type of coordination (which you also saw during Covid) works is if we live under a One World Government. https://controlplanecapital.com/p/rivalry-between-countries-is-curated

Are we going to be able to create some of these realistic AI simulations and get any type of distribution? There will probably be a very small time window.