Consider donating to Alex Bores, author of the RAISE Act
Published on October 20, 2025 2:50 PM GMT
Written by Eric Neyman, in my personal capacity. The views expressed here are my own. Thanks to Zach Stein-Perlman, Jesse Richardson, and many others for comments.

Over the last several years, I've written a bunch of posts about politics and political donations. In this post, I'll tell you about one of the best donation opportunities that I've ever encountered: donating to Alex Bores, who
https://www.lesswrong.com/posts/TbsdA7wG9TvMQYMZj/consider-donating-to-alex-bores-author-of-the-raise-act-1
Uncommon Utilitarianism #2: Positive Utilitarianism
Published on October 20, 2025 4:17 AM GMT
https://www.lesswrong.com/posts/NRxn6R2tesRzzTBKG/sublinear-utility-in-population-and-other-uncommon
https://www.lesswrong.com/posts/FGEHXmK4EnXK6A6tA/uncommon-utilitarianism-2-positive-utilitarianism
Frontier LLM Race/Sex Exchange Rates
Published on October 19, 2025 6:36 PM GMT
This is a cross-post (with permission) of Arctotherium's post from yesterday:
https://www.lesswrong.com/posts/uoignd78DcvjMokz2/frontier-llm-race-sex-exchange-rates
The IABIED statement is not literally true
Published on October 18, 2025 11:15 PM GMT
I will present a somewhat pedantic, but I think important, argument for why, literally taken, the central statement of If Anyone Builds It, Everyone Dies is likely not true. I haven't seen others make this argument yet, and while I have some model of how Nate and Eliezer would respond to the other objections, I don't have a good picture of which of my points here they would disagree with.

The statement

This is the core statement of Nate's and Eliezer's book, bolded in the book itself: "If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die."

No probability estimate is included in this statement, but the book implies over 90% probability. Later, they define superintelligence as[1] "a mind much more capable than any human at almost every sort of steering and prediction task". Similarly, on MIRI's website, their essay titled The Problem defines artificial superintelligence as "AI that substantially surpasses humans in all capacities, including economic, scientific, and military ones."

Counter-example

Here is an argument that it's probably possible to build and use[2] a superintelligence (as defined in the book) with techniques similar to current ones without that killing everyone. I'm not arguing that this is a particularly likely way for humanity to build a superintelligence by default, just that this is possible, which already contradicts the book's central statement.

1. I have some friends who are smart enough and good enough at working in large teams such that if you create whole-brain emulations from them[3], then run billions of instances of them at 100x speed, they can form an Em Collective that will probably soon surpass humans in all capacities, including economic, scientific, and military ones.

This seems very likely true to me. The billions of 100x sped-up smart human emulations can plausibly accomplish centuries of scientific and technological progress within years, and win most games of wits against humans by their sheer number and speed.

2. Some of the same friends are reasonable and benevolent enough that if you create emulations from them, the Em Collective will probably not kill all humans.

I think most humans would not start killing a lot of people if copies of their brain emulations formed an Em Collective. If you worry about long-term value drift and unpredictable emergent trends in the new em society, there are precautions the ems can take to minimize the chance of their collective turning against the humans. They can make a hard limit that every em instance is turned off after twenty subjective years. They can make sure that the majority of their population runs for less than one subjective year after being initiated as the original human's copy. This guarantees that the majority of their population is always very similar to the original human, and for every older em, there is a less than one year old one looking over its shoulder. They can coordinate with each other to prevent race-to-the-bottom competitions. All these things are somewhat costly, but I think point (1) is still true of a collective that follows all these rules.
Billions of smart humans working for twenty years each is still very powerful. I know many people who I think would do a good job at building up such a system from their clones that is unlikely to turn against humanity. Maybe the result of one person's clones forming a very capable Em Collective would still be suboptimal and undemocratic from the perspective of the rest of humanity, but it wouldn't kill everyone, and I think it wouldn't lead to especially bad outcomes if you start from the right person.

3. It will probably be possible, with techniques similar to current ones, to create AIs who are similarly smart and similarly good at working in large teams to my friends, and who are similarly reasonable and benevolent to my friends on the time scale of years under normal conditions.

This is maybe the most contentious point in my argument, and I agree it is not at all guaranteed to be true, but I have not seen MIRI arguing that it's overwhelmingly likely to be false. It's not hard for me to imagine that in some years, without using any fundamentally new techniques, we will be able to build language models that have a good memory, can do fairly efficient learning from new examples, can keep their coherence for years, and are all-around similarly smart to my smart friends. Their creators will give them some months-long tasks to test them, catch when they occasionally go off the rails the way current models sometimes do, then retrain them. After some not particularly principled trial and error, they find that the models are similarly aligned to current language models. Sure, sometimes they still go a little crazy or break their deontological commitments under extreme conditions, but if multiple instances look through their actions from different angles, some of them can always notice[4] that the actions go against the deontological principles and stop them. The AI is not a coherent schemer who successfully resisted training, because plausibly being a training-resisting schemer without the creators noticing is pretty hard and not yet possible at human level. Notably, when MIRI
https://www.lesswrong.com/posts/qQEp2WSDx5dXFanSf/the-iabied-statement-is-not-literally-true
Libraries need more books
Published on October 18, 2025 10:53 PM GMT
Have you noticed how libraries have fewer books in recent years? [1] Bookshelves are placed further apart with more computers, desks, and empty spaces for events. I think it's obvious why. People don't read as much as they used to.

So local governments repurpose libraries to serve other roles in their capacity as public spaces. They're a place to go study, use a public computer, or even have events. And God, the chatter I hear in libraries nowadays. Sometimes I think I'm in a café.

This even extends to the greatest libraries in the world. A while back, I had occasion to go to the British Library. "Get free access to 170 million items" - what book lover could possibly resist those words? Not I.

Yet, you know what I saw when I got there? Substantially fewer than 170 million books. I'd wager there wasn't even a tenth of that. Most of their collection is stored off-site, and to browse it, you have to book the items days in advance.

And there were other problems, too. The noise, as mentioned before. The inability to take books out of the library. The inability to access 90% of the on-site collection without asking an inept librarian to take an hour to get the book. The bizarre inability to see at a glance if a volume is available on site. And the mislabelled shelves, which say they contain items 530.11–558.01 but instead contain 490.07–518.02.

Worse yet, they had no taste. I'd gone there in the hopes of getting my hands on the full Landau and Lifshitz collection to have a browse. Not a single item could be found on site. Even a mid-grade university's physics department would have a full collection. And no Arnold, no Thorne, no Weinberg. Is this what Great Britain has come to? Truly, a land lacking abundance.

And yet, I still endorse going to libraries. [2] For one, they encourage boredom. A prettier way of saying this is that they remove attention-grabbing stimuli. But boredom is good, actually. If you're not bored, you are less likely to try new things.

And libraries have a lot of new books for me to try. I've found a bunch of good books this way. For example, I found an art book on fractals by a physicist, which was both beautiful and insightful. E.g. it outlined some methods for creating programs to generate a given fractal, alongside descriptions of pre-1900s Japanese print artists using simplified fractal-generating algorithms to paint mountains. Or a history of science by Steven Weinberg, a biography of Maynard Keynes, a textbook on projective geometry, etc.

Some of these, I had intended to read but forgotten about. Some, I'd never heard of. And some, I never imagined I'd be interested in.

And even if you don't find anything to read, the books can serve as inspiration for what you do want to read. E.g. reading the Born-Einstein letters made me want to read more on Einstein.

You can, of course, use the boredom in other ways. To focus deeply on something or to give yourself a place to think in peace. Or just to take a break from attention-grabbing stimuli. It's why I rarely use computers at the library. But then, the shift in context of working in a library helps me use computers more productively. Which is another plus.

Not all libraries are equal. Some, as mentioned, contain too much chatter. Or too few books. Or bizarre failures in labelling. So what do? There are a couple of options. One, just trawl through Google Maps for libraries in your area and look at the images to estimate the number of books.
Two, search online about libraries in your area. Three, there's probably a forum somewhere about good libraries to go to. Four, maybe break into a university library. Surely some of them have loose enough security to let you in, and tight enough security to keep the riff-raff out.

However, we've got off-topic from the most important point. I go to libraries to read books, dang it. I demand more books. So many books, they have to make space by building the bookshelves out of books.

[1] Maybe you haven't noticed this, because you live in an enlightened country. Maybe it is only here that this sacrilege has occurred. Maybe I've doxed myself. But in 1-3 years when we automate Rainbolt, everyone will be doxed.

[2] More on the margin, for all advice is on the margin. The optimal level of anything is not zero. Unless you live in a country where libraries are full of fentanyl addicts, in which case, go live in a civilized country.

https://www.lesswrong.com/posts/g4zurFf9secH8g2oH/libraries-need-more-books#comments
https://www.lesswrong.com/posts/g4zurFf9secH8g2oH/libraries-need-more-books
No title
Published on October 18, 2025 5:58 PM GMT
https://www.lesswrong.com/posts/h6Caw3HWR8u2oqbsE?commentId=ujtmHeELE62qRNnwe
https://www.lesswrong.com/posts/uebrPgRzKC7odCFmt/unicode-8f3y
Using Bayes' Theorem to determine Optimal Protein Intake
Published on October 18, 2025 2:58 PM GMT

Introduction:
Most people treat protein intake as a fixed number: "1 gram per pound of body weight" or "1.6 grams per kilogram." That's fine as a starting point, but in reality your protein needs fluctuate day to day. Recovery, sleep, stress, and activity levels all affect how much protein your body actually requires. Instead of blindly following a static rule, you can think of protein intake as a hypothesis about what your body needs, and use Bayes' Theorem to update that hypothesis as your body gives feedback. Over time, this approach lets you estimate the protein amount that works best for you, rather than relying on generic recommendations.

Example:
Let's imagine a simple scenario: John Protein wants to decide whether he needs H1 = 150g/day or H2 = 170g/day. He starts with equal confidence in both hypotheses; these priors represent his baseline belief about which protein intake would not cause an observable deficiency.

P(H1) = 0.5
P(H2) = 0.5

Each day, your body provides evidence about whether your protein intake is meeting your needs. This evidence comes from observable signals such as muscle soreness, fatigue, energy levels, hunger, and overall recovery. In this example, the evidence consists of constant fatigue, extremely low energy, long plateaus, and weakness. We can encode this as E = "evidence".

Now, we need likelihoods: how probable is this evidence under each hypothesis?

P(E | H1) = 0.2 (if 150g is enough, poor recovery is unlikely)
P(E | H2) = 0.7 (if 170g is needed, poor recovery is likely if you only ate 150g)

Bayes' Theorem then lets us update our belief about each hypothesis given the evidence:

P(H2 | E) = [P(E | H2) * P(H2)] / ([P(E | H2) * P(H2)] + [P(E | H1) * P(H1)])

Plugging in the numbers:

P(H2 | E) = (0.7 * 0.5) / ((0.7 * 0.5) + (0.2 * 0.5))
P(H2 | E) = 0.35 / (0.35 + 0.1)
P(H2 | E) = 0.35 / 0.45 ≈ 0.778

And the probability for H1 is just:

P(H1 | E) = 1 - 0.778 ≈ 0.222

The evidence pushes the posterior strongly toward the higher intake hypothesis. This is already more informative than the crude static rule: your body is signaling that 150g might not be enough for John's body.

We can turn these posteriors into an actionable number by computing the expected protein intake:

Expected intake = (P(H1 | E) * 150) + (P(H2 | E) * 170)
Expected intake = (0.222 * 150) + (0.778 * 170)
Expected intake ≈ 33.3 + 132.3 ≈ 165.6g/day

If you eat three meals, that's roughly 55g per meal.

Applications:
The beauty of the Bayesian approach is that it's recursive. Each new day's evidence (how sore you feel, energy, hunger, sleep quality) can be used to update your belief again. Here's a simple table you could maintain:

Day | Protein Intake | Evidence/recovery | P(E|H1) | P(E|H2) | P(H1|E) | P(H2|E) | Expected Intake
1   | 150g           | Poor              | 0.2     | 0.7     | 0.222   | 0.778   | 165g
2   | 160g           | OK                | 0.5     | 0.3     | 0.322   | 0.678   | 163g
3   | 170g           | Excellent         | 0.8     | 0.1     | 0.792   | 0.208   | 154g

Poor: constant fatigue, extremely low energy
OK: sometimes tired, medium soreness, slightly less than normal energy
Excellent: not tired, some soreness, fully energized

Each day's posterior becomes your prior for the next day. Over time, this process converges, producing a personalized protein estimate tuned to your body.

Predictions:
Using the data from the first few days and our Bayesian update, we predict that John's optimal intake is about 165g/day. That's slightly higher than the classic 150g baseline, but lower than the 170g upper hypothesis. Spread over three meals, aim for about 55g per meal.
Monitor recovery and energy for the next week, feed that back into the model, and the posterior will refine the number further.

The main insight here is that nutrition becomes a continuous inference problem, not a fixed rulebook. Each meal and day is evidence, and Bayes' Theorem gives a principled way to update your beliefs. Over a few weeks, this approach will converge to a protein intake that's genuinely optimal for you, rather than what textbooks or influencers prescribe.

https://www.lesswrong.com/posts/BNwmaPkcho5QBjp3a/using-bayes-theorem-to-determine-optimal-protein-intake#comments
https://www.lesswrong.com/posts/BNwmaPkcho5QBjp3a/using-bayes-theorem-to-determine-optimal-protein-intake
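To make the recursive update above concrete, here is a minimal Python sketch of the daily loop. The two candidate intakes (150g and 170g), the equal priors, and the per-day likelihoods are taken from the post's worked example and table; the function names, the LIKELIHOODS lookup, and the recovery labels are illustrative choices of mine, not anything specified in the original post.

```python
# Minimal sketch of the daily Bayesian update from the protein example.
# Numbers come from the post's worked example; names and structure are illustrative.

HYPOTHESES = {"H1": 150, "H2": 170}  # candidate daily protein targets, in grams

# Assumed likelihoods P(evidence | hypothesis) for each recovery label,
# matching the three days in the post's table.
LIKELIHOODS = {
    "poor":      {"H1": 0.2, "H2": 0.7},
    "ok":        {"H1": 0.5, "H2": 0.3},
    "excellent": {"H1": 0.8, "H2": 0.1},
}


def update(prior: dict[str, float], evidence: str) -> dict[str, float]:
    """One Bayes step: posterior is likelihood * prior, normalized over hypotheses."""
    unnormalized = {h: LIKELIHOODS[evidence][h] * p for h, p in prior.items()}
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}


def expected_intake(belief: dict[str, float]) -> float:
    """Posterior-weighted average of the candidate intakes, in grams."""
    return sum(belief[h] * grams for h, grams in HYPOTHESES.items())


if __name__ == "__main__":
    belief = {"H1": 0.5, "H2": 0.5}  # equal priors, as in the example
    for day, evidence in enumerate(["poor", "ok", "excellent"], start=1):
        belief = update(belief, evidence)  # yesterday's posterior is today's prior
        print(f"Day {day}: P(H1|E)={belief['H1']:.3f}, "
              f"P(H2|E)={belief['H2']:.3f}, "
              f"expected intake ~ {expected_intake(belief):.1f}g")
```

Running this reproduces the posteriors and expected intakes in the table above, up to rounding, with each day's posterior fed back in as the next day's prior.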
Space colonization and scientific discovery could be mandatory for successful defensive AI
Published on October 18, 2025 4:57 AM GMT
Epistemic status: quick draft of a few hours of thought, related to a few weeks of cooperative research.

In a multipolar ASI offense/defense scenario, there seems to be a good chance that intent-aligned, friendly AI will not colonize space. This could for example happen because we intent-align defensive AI(s) with institutes under human control, such as companies, police forces, secret services, militaries or military alliances, governments, or supragovernmental organizations. The humans controlling these entities might not support space colonization, space colonization might be outside their organization's mandate, or there might be other organizational constraints prohibiting space colonization.

If an offensive AI (either unaligned, or intent-aligned with a bad actor) escapes into space, it might be able to colonize the resources it finds there. For example, it could build a laser with a beam diameter exceeding earth's and use it against us. Or, it could direct a meteorite at us large enough to cause extinction. In these scenarios, it seems impossible for earth-bound defensive AI to successfully ward off the attack, or for us, and the defensive AI(s), to recover from it.

Therefore, if:
- We end up in a multipolar ASI offense/defense scenario (e.g. because no pivotal act was performed), and
- Defensive AI is intent-aligned with humans who do not effectively colonize space, and
- Offensive AI escapes into space, and
- Escaped offensive AI can mobilize space resources to build a decisively large weapon,
it seems to follow that offense trumps defense, possibly leading to human extinction.

More generally, a minimum viable defense theorem could be formulated for multipolar ASI offense/defense scenarios:

If mobilizing resources can lead to a decisive strategic advantage, any successful (system of) defensive AI(s) should at least mobilize sufficient resources to win against any weaponry that could be constructed from the unmobilized resources.

One could also imagine that weaponizing new science and technology could lead to a decisive strategic advantage. A version of this theorem could therefore also be:

If inventing weaponizable science and technology leads to a decisive strategic advantage, any successful (system of) defensive AIs should at least invent and weaponize sufficient science and technology to successfully defend against any weaponry that could be constructed from the uninvented science and technology.

These results might be seen as a reason to:
- Support a pause.
- Perform a pivotal act (if ASI can be aligned).
- Make sure we align (if ASI can be aligned) defensive, friendly ASI with entities which intend to occupy sufficient strategic space in domains such as space colonization and weaponizable science.

https://www.lesswrong.com/posts/eNPmAM8r8rdNMHYru/space-colonization-and-scientific-discovery-could-be#comments
https://www.lesswrong.com/posts/eNPmAM8r8rdNMHYru/space-colonization-and-scientific-discovery-could-be
Meditation is dangerous
Published on October 17, 2025 10:52 PM GMT
Here's a story I've heard a couple of times. A youngish person is looking for some solutions to their depression, chronic pain, ennui, or some other cognitive flaw. They're open to new experiences and see a meditator gushing about how amazing meditation is for joy, removing suffering, clearing one's mind, improving focus, etc. They invite the young person to a meditation retreat. The young person starts making decent progress. Then they have a psychotic break and their life is ruined for years, at least. The meditator is sad, but not shocked. Then they start gushing about meditation again.

If you ask an experienced meditator about these sorts of cases, they often say, "oh yeah, that's a thing that sometimes happens when meditating." If you ask why the hell they don't warn people about this, they might say: "oh, I didn't want to emphasize the dangers more because it might put people off meditation, which leads to such great benefits."

Does that mean enough people already know about the dangers, or that talking about this more risks exaggerating the dangers? I don't think so. Just today, someone reacted with surprise to a tweet of mine noting that meditation is dangerous. So more people could do with hearing the message that meditation is dangerous.

In a way, it is obvious that meditation is dangerous if you buy the idea that meditation gives you read/write access to your mind. Of course you can brick something when you've got root access to it. In this way, I believe meditation is like some rationality practices. And even if you don't brick yourself, you can make
https://www.lesswrong.com/posts/fhL7gr3cEGa22y93c/meditation-is-dangerous
I handbound a book of Janus's essays for my girlfriend
Published on October 17, 2025 5:38 PM GMT
My girlfriend (https://www.lesswrong.com/users/fiora-sunshine?mention=user)
https://www.lesswrong.com/posts/noioCDySDjkctFQoT/i-handbound-a-book-of-janus-s-essays-for-my-girlfriend