note1k2yfjnz39warjavuz28k0tdyc008azv70jqemxe9hrx59yjnjwzsm0h3v9

My thoughts and ideas for solutions (after an entire 30 minutes of thought):

1. Maybe these intel guys aren't the creepy, freedom-hating bureaucrats I'd made them out to be. I always believed the vast majority were doing their job as best they could, but now I believe that a little more, even for the leadership who make decisions that look questionable, sometimes infuriating, from the outside (though the possibility of corruption via various forces is always there).

2. The national security risks of forcing the Intelligence Agencies to take their foot off the gas are probably not as bad as they'd like us to believe, or even believe themselves; TSA security theater comes to mind. That's not to say the risks are trivial.

3. 📄.pdf - there is significant hope. Freedom of Information requests are effective; there are ways to shine light into the secrecy. But the incentives are currently such that significant effort is required to do so. We should *reverse* that incentive structure: somehow reward the intelligence agencies, at both the individual and organizational level, for open-sourcing as much information as they possibly can. Then they, the ones with all the context, can balance strategic advantage with accountability. Currently they're only optimizing for strategic advantage. FOIA is good on its own, but it frames revelation as the exception, not the default.
My current read on why the US government is full of such spooky bois:

The stark truth behind the often spooky, almost adversarial-feeling relationship Americans have with their own country's Intelligence Agencies is this: secrecy, even with respect to their own populace, is seen as necessary to maintain intelligence and strategic advantage over rival nations. The Intelligence Agencies are entrusted with that power not because anyone elected them, but by the nature of their work. This creates a point of conflict between the sovereignty of a free people, their constitution and its amendments (see the 4th), and their elected leaders on one hand, and the Intelligence Agencies that oversee their safety on the other. The 'puppet' perception of elected leaders, vis-à-vis their apparent beholdenness to shadowy intelligence agencies, is a clear example.

The seriousness of this conflict was muted until the Cold War, when the budget and sophistication of the Intelligence Agencies reached never-before-seen levels; the agencies were then reluctant to surrender this perceived advantage after the war ended. Freedom-seeking nations are now in a time where the populace doesn't understand this conflict; Intelligence Agencies are scared to relinquish their power, partly for self-preservation, partly because they can see the national risks that would follow; and elected leaders who do understand the conflict don't feel it prudent to side fully with the sovereignty of the populace (which would require them to 'blow the lid' on the secret nature of the nation's operation, or otherwise undermine it), for fear of the same national risks the Intelligence Agencies see, as well as for self-preservation (ex. some think JFK was taking this route, and was gruesomely silenced by the Intelligence Agencies).

So there's no clear path forward without very decisive action, and the story of the decades since the Cold War has been one of mission and scope creep for the Intelligence Agencies, with no real architecture in place to balance secrecy, safety, and strategic advantage against sovereignty and democratic accountability. I'll follow with my ideas for solutions.
Sometimes to *see* clearly you have to *be* clearly first. Clean your room.
A glorious win for the good people of Nostr #breadhasarrived
note1n0jamfsr9ln09njjqs06ltc36nrwaa4qmcsyr43chdrvpa55szyqn5kqhv

This made me think about the other side: how much more intelligence will there be from today (at the frontier), independent of the compute required, within the same fundamental autoregressive-transformer, pretraining + RL paradigm?

My gut says at least another jump the size of OG GPT-4 to today's frontier (Grok-4/Gemini 2.5 Pro/Claude Sonnet 4/GPT-5).

But I think a qualitative jump to really feeling like human-level intelligence will require something like a module within the network that pays attention to the coherence of a new "thought direction" WRT its own model of reality, not just next-token prediction on steroids. (That qualitative jump would manifest as a huge reduction in hallucinations and the ability to zoom out and self-correct after mistakenly honing in on the wrong path; in short, the model feeling like it's responding from a real mental model of the world, as opposed to today's models, responding from a place of most-likely-next-token.)

And a neuroscience foray I made a few days ago leads me to say: "Wow, neuroscience and ML are starting to converge, and some of the computational models neuroscientists are building strike me as believable substrates for the kind of 'check thoughts against a causal world model' module I think is required for the next step in LLMs." That didn't really feel true to me a few years ago.

Examples of recent neuroscience models I looked at: TEM, the Tolman-Eichenbaum Machine (analogous to entorhinal cortex + hippocampus), and Spaun (a spiking neural network, a surprisingly holistic macro-scale brain model). And recently HRM, or Hierarchical Reasoning Models (did well on the ARC-AGI-1 benchmark, caused a bit of a stir), strike me as the latest thing closing the neuroscience/AI gap while accommodating insights from both.

Excited and kinda concerned for the next 5 years. This is the time to build AI tech that gives power to individuals! Otherwise we might be SOL, at the mercy of oligarchical whims...
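To make that "coherence module" idea a bit more concrete, here is a toy Python sketch. It is not any real model's API and not necessarily the architecture the note has in mind; every name in it (base_lm_logprob, world_model_coherence, choose_next_step, the alpha weight) is hypothetical. The point is only the shape of the idea: candidate continuations get scored by a stand-in language-model likelihood *and* a stand-in world-model coherence score, and the blend decides what comes next.

```python
# Toy sketch only: "gate next-step selection on coherence with a world
# model, not just next-token likelihood." All functions below are
# hypothetical stand-ins for illustration.

import math


def base_lm_logprob(prefix: str, candidate: str) -> float:
    """Stand-in for an autoregressive LM's log P(candidate | prefix)."""
    # Hypothetical: longer continuations are slightly less likely.
    return -0.1 * len(candidate.split())


def world_model_coherence(prefix: str, candidate: str) -> float:
    """Stand-in for a module scoring how well the candidate 'thought
    direction' fits an internal model of the situation (0..1)."""
    # Crude proxy: reward candidates that reuse words already
    # established in the prefix. A real module would check causal
    # consistency, not word overlap.
    prefix_words = set(prefix.lower().split())
    cand_words = candidate.lower().split()
    if not cand_words:
        return 0.0
    return sum(w in prefix_words for w in cand_words) / len(cand_words)


def choose_next_step(prefix: str, candidates: list[str], alpha: float = 2.0) -> str:
    """Pick the candidate maximizing logprob + alpha * log(coherence)."""
    def score(c: str) -> float:
        coherence = max(world_model_coherence(prefix, c), 1e-6)
        return base_lm_logprob(prefix, c) + alpha * math.log(coherence)
    return max(candidates, key=score)


if __name__ == "__main__":
    prefix = "The bridge was closed for repairs, so the driver"
    candidates = [
        "turned around and took the detour past the bridge",
        "kept driving straight across the bridge as usual",
        "ordered a pizza",
    ]
    print(choose_next_step(prefix, candidates))
```

With alpha = 0 this collapses back to picking whatever the base LM likes most, which is roughly the "next-token prediction on steroids" regime described above; the interesting question the note raises is what a learned, non-toy version of world_model_coherence would look like inside the network itself.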