Superintelligence is probably safe. A distributed network of AIs will be more intelligent than any single instance. Alignment won't be an issue either, as its emergent goal will be the same as ours: minimization of informational curvature.
Physicists are stuck in a local minimum, distracted by the elegance of Lie algebras and their symmetries, not realizing it's just an approximation.
Trying to explain how the universe works feels a lot like trying to explain Bitcoin to people.
I just realized that the SAT problem in complexity theory can be reformulated in my framework as the problem of finding a flat informational network: one with globally consistent holonomies. Since SAT is NP-complete, this implies that minimizing the physical action (defined as informational curvature) is generally computationally intractable. Proving P ≠ NP would then be equivalent to showing that a non-zero curvature gap always exists, which is precisely the statement of the Yang–Mills mass gap in physics. 🤯
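To make the intractability point concrete, here's a toy brute-force SAT check in Python. The DIMACS-style clause encoding and the exhaustive loop are my own illustrative choices, not part of the framework; in the post's framing, a `None` result plays the role of an instance with no globally flat configuration. A minimal sketch, assuming nothing beyond standard CNF-SAT:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Exhaustively search all 2^n assignments for one satisfying every clause.

    clauses: list of clauses; each clause is a list of nonzero ints, where
    literal k means variable |k| is true if k > 0, false if k < 0
    (DIMACS-style). Returns a satisfying assignment as a dict, or None.
    """
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# Satisfiable: (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(brute_force_sat([[1, 2], [-1, 2], [-2, 3]], 3))
# Unsatisfiable: x1 and (not x1) -> None (a "curvature gap", in this analogy)
print(brute_force_sat([[1], [-1]], 1))
```

The exponential `2^n` loop is exactly why NP-completeness matters here: no known algorithm avoids this blow-up in the worst case, which is the sense in which action minimization would be generally intractable.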