Orienting Toward Wizard Power Published on May 8, 2025 5:23 AM GMT
For months, I had the feeling: something is wrong. Some core part of myself had gone missing. I had words and ideas cached, which pointed back to the missing part. There was What Money Cannot Buy: https://www.lesswrong.com/posts/YABJKJ3v97k9sbxwg/what-money-cannot-buy
https://www.lesswrong.com/posts/Wg6ptgi2DupFuAnXG/orienting-toward-wizard-power
There's more low-hanging fruit in interdisciplinary work thanks to LLMs Published on May 7, 2025 7:48 PM GMT
I'm currently doing conceptual theoretical work on how the human fascia system works. While I do rely on some original conceptual insights that I came up with on my own, Gemini 2.5 Pro massively speeds up my conceptual work. Being able to formulate a principle or analogy and then have Gemini apply it is very useful. There are a bunch of scientific fields where we currently have a lot of experimental data but lack coherent theory to interlink the experimental findings. Based on my own experience, current LLMs already seem powerful enough to help bridge that theory gap. Being able to ask "Hey, does field XYZ have any useful insights for the problem I'm tackling?" is also very helpful for making progress in theory. The LLMs also solve a key problem that autodidacts have when engaging with existing scientific fields: if you have a new idea, they are good at telling you the main criticisms that would come from an orthodox researcher in the field. We might see a rise in interdisciplinary work that didn't happen in the past because of academia's hyperspecialization. People frequently say that progress in science has stalled because there's little low-hanging fruit. When it comes to doing certain interdisciplinary work, it's now a lot easier to pick the fruit. If you are starting a scientific career right now, think about what kind of interdisciplinary work you might do where it's now easier to make progress because of the existence of LLMs. If you have a research question, one approach is to ask a reasoning model to create a debate between two highly skilled researchers with different approaches to your research question. You might learn valuable insights about your research question this way. Besides simulating existing researchers in the field, you can ask the LLM to simulate philosophers and tell it that the philosophers understand all the facts about a field; this might give you a valuable sense of how insights that philosophers found through a lot of hard work translate into individual fields. It's not clear what the best approaches are for getting an LLM to help you with interdisciplinary work, but there's a lot of fruit out there to be picked right now.
https://www.lesswrong.com/posts/XYzChDaFZzifYtLJ2/untitled-draft-ykgk#comments https://www.lesswrong.com/posts/XYzChDaFZzifYtLJ2/untitled-draft-ykgk
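As a concrete illustration of the debate approach described above, here is a minimal sketch of a prompt builder. The framing, role descriptions, and example question are all hypothetical; the idea is simply to paste the resulting prompt into Gemini 2.5 Pro or another reasoning model.

```python
def debate_prompt(question: str, field_a: str, field_b: str, rounds: int = 3) -> str:
    """Build a prompt asking a reasoning model to stage a structured debate
    between two simulated researchers from different fields."""
    return (
        f"Stage a debate between two highly skilled researchers.\n"
        f"Researcher A is an orthodox expert in {field_a}; "
        f"Researcher B approaches the problem from {field_b}.\n"
        f"Research question: {question}\n"
        f"Run {rounds} rounds. In each round, each researcher states their "
        f"strongest argument, names the kind of evidence their field would "
        f"accept, and responds to the other's previous point.\n"
        f"Finish with a neutral summary of where they agree, where they "
        f"disagree, and what experiment or observation could settle it."
    )

# Example usage (hypothetical research question):
print(debate_prompt(
    "How does the fascia system transmit mechanical tension between muscles?",
    "biomechanics",
    "systems biology",
))
```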
OpenAI Claims Nonprofit Will Retain Nominal Control Published on May 7, 2025 7:40 PM GMT
Your voice has been heard. OpenAI has ‘heard from the Attorney Generals’ of Delaware and California, and as a result the nonprofit will retain control under their new plan (https://openai.com/index/evolving-our-structure/), and both companies will retain the original mission. Technically they are not admitting that their original plan was illegal and one of the biggest thefts in human history, but that is how you should in practice interpret the line ‘we made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California.’ Another possibility is that the nonprofit board finally woke up and looked at what was being proposed and how people were reacting, and realized what was going on. The letter that was recently sent to those Attorneys General plausibly was a major causal factor in any or all of those conversations. The question is, what exactly is the new plan? The fight is far from over.
The Mask Stays On?
As previously intended, OpenAI will transition their for-profit arm, currently an LLC, into a PBC. They will also be getting rid of the capped-profit structure. However, they will be retaining the nonprofit’s control over the new PBC, and the nonprofit will (supposedly) get fair compensation for its previous financial interests in the form of a major (but suspiciously unspecified, other than ‘a large shareholder’) stake in the new PBC.
Bret Taylor (Chairman of the Board, OpenAI): The OpenAI Board has an updated plan for evolving OpenAI’s structure. OpenAI was founded as a nonprofit, and is today overseen and controlled by that nonprofit. Going forward, it will continue to be overseen and controlled by that nonprofit. Our for-profit LLC, which has been under the nonprofit since 2019, will transition to a Public Benefit Corporation (PBC)–a purpose-driven company structure that has to consider the interests of both shareholders and the mission. The nonprofit will control and also be a large shareholder of the PBC, giving the nonprofit better resources to support many benefits. Our mission remains the same, and the PBC will have the same mission. We made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California. We thank both offices and we look forward to continuing these important conversations to make sure OpenAI can continue to effectively pursue its mission of ensuring AGI benefits all of humanity. Sam wrote the letter below to our employees and stakeholders about why we are so excited for this new direction.
The rest of the post is a letter from Sam Altman, and sounds like it; you are encouraged to read it in full: https://openai.com/index/evolving-our-structure/
Sam Altman (CEO OpenAI): The for-profit LLC under the nonprofit will transition to a Public Benefit Corporation (PBC) with the same mission. PBCs have become the standard for-profit structure for other AGI labs like Anthropic and X.ai, as well as many purpose driven companies like Patagonia. We think it makes sense for us, too.
Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler. The nonprofit will continue to control the PBC, and will become a big shareholder in the PBC, in an amount supported by independent financial advisors, giving the nonprofit resources to support programs so AI can benefit many different communities, consistent with the mission.
(OpenAI, Head of Mission Alignment): OpenAI is, and always will be, a mission-first organization. Today’s update is an affirmation of our continuing commitment to ensure that AGI benefits all of humanity.
Your Offer is (In Principle) Acceptable
I find the structure of this solution not ideal but ultimately acceptable. The current OpenAI structure is bizarre and complex. It does important good things, some of which this new arrangement will break. But the current structure also made OpenAI far less investable, which means giving away more of the company to profit maximizers, and causes a lot of real problems. Thus, I see the structural changes, in particular the move to a normal profit distribution, as potentially a fair compromise to enable better access to capital – provided it is implemented fairly, and isn’t a backdoor to further shifts. The devil is in the details. How is all this going to work? What form will the nonprofit’s control take? Is it only that they will be a large shareholder? Will they have a special class of supervoting shares? Something else? This deal is acceptable if and only if the nonprofit:
1. Has truly robust control going forward, that is ironclad and that allows it to guide AI development in practice, not only in theory. Is this going to be only via voting shares? That would be a massive downgrade from the current power of the board, which already wasn’t so great. In practice, the ability to win a shareholder vote will mean little during potentially crucial fights like a decision whether to release a potentially dangerous model. What this definitely still does is give cover to management to do the right thing, if they actively want to do that; I’ll discuss this more later.
2. Gets a fair share of the profits, one that matches the value of its previous profit interests. I am very worried they will still get massively stolen from on this. As a reminder, right now most of the net present value of OpenAI’s future profits belongs to the nonprofit.
3. Uses those profits to advance its original mission rather than turning into a de facto marketing arm or doing generic philanthropy that doesn’t matter, or both. There are still clear signs that OpenAI is largely planning to have the nonprofit buy AI services on behalf of other charities, or otherwise do things that are irrelevant to the mission. That would make it an ‘ordinary foundation’ combined with a marketing arm, effectively making its funds useless, although it could still act meaningfully via its control mechanisms.
Remember that in these situations, the ratchet only goes one way. The commercial interests will constantly try to wrestle greater control and ownership of the profits away from us. They will constantly cite necessity and expedience to justify this. You’re playing defense, forever. Every compromise improves their position, and this one definitely will compared to doing nothing.
Or: [image]
Quintin Pope: Common mistake. They forgot to paint “Do Not Open” on the box.
There’s also the issue of the extent to which Altman controls the nonprofit board. The reason the nonprofit needs control is to impact key decisions in real time. It needs control of a form that lets it do that. Because that kind of lever is not ‘standard,’ there will constantly be pressure to get rid of that ability, with threats of mild social awkwardness if these pressures are resisted. So with love, now that we have established what you are, we are merely haggling over the price (https://quoteinvestigator.com/2012/03/07/haggling/).
The Skeptical Take
Rob Wiblin had an excellent thread explaining the attempted conversion, and he has another good explainer on what this new announcement means, as well as an emergency 80,000 Hours podcast on the topic that should come out tomorrow. His take is skeptical. Which, given the track records here, seems like a highly reasonable place to start. The central things to know about the new plan are indeed:
The transition to a PBC and removal of the profit cap will still shift priorities, legal obligations and incentives towards profit maximization.
The nonprofit’s ‘control’ is at best weakened, and potentially fake.
The nonprofit’s mission might effectively be fake.
The nonprofit’s current financial interests could largely still be stolen.
It’s an improvement, but it might not effectively be all that much of one? We need to stay vigilant. The fight is far from over.
Rob Wiblin: So OpenAI just said it’s no longer going for-profit and the non-profit will ‘retain control’. But don’t declare victory yet. OpenAI may actually be continuing with almost the same plan & hoping they can trick us into thinking they’ve stopped! Or perhaps not. I’ll explain: The core issue is control of OpenAI’s behaviour, decisions, and any AGI it produces. Will the entity that builds AGI still have a legally enforceable obligation to make sure AGI benefits all humanity? Will the non-profit still be able to step in if OpenAI is doing something appalling and contrary to that mission? Will the non-profit still own an AGI if OpenAI develops it? It’s kinda important! The new announcement doesn’t answer these questions and despite containing a lot of nice words the answers may still be: no. (Though we can’t know and they might not even know themselves yet.) The reason to worry is they’re still planning to convert the existing for-profit into a Public Benefit Corporation (PBC). That means the profit caps we were promised would be gone. But worse… the nonprofit could still lose true control. Right now, the nonprofit owns and directly controls the for-profit’s day-to-day operations. If the nonprofit’s “control” over the PBC is just extra voting shares, that would be a massive downgrade as I’ll explain. (The reason to think that’s the plan is that today’s announcement sounded very similar to a proposal they floated in Feb in which the nonprofit gets special voting shares in a new PBC.) Special voting shares in a new PBC are simply very different and much weaker than the control they currently have! First, in practical terms, voting power doesn’t directly translate to the power to manage OpenAI’s day-to-day operations – which the non-profit currently has. If it doesn’t fight to retain that real power, the non-profit could lose the ability to directly manage the development and deployment of OpenAI’s technology.
That includes the ability to decide whether to deploy a model (!) or license it to another company. Second, PBCs have a legal obligation to balance public interest against shareholder profits. If the nonprofit is just a big shareholder with super-voting shares, other investors in the PBC could sue claiming OpenAI isn’t doing enough to pursue their interests (more profits)! Crazy sounding, but true. And who do you think will be more vociferous in pursuing such a case through the courts… numerous for-profit investors with hundreds of billions on the line, or a non-profit operated by 9 very busy volunteers? Hmmm. In fact in 2019, OpenAI President Greg Brockman said one of the reasons they chose their current structure and not a PBC was exactly because it allowed them to custom-write binding rules including full control to the nonprofit! So they know this issue — and now want to be a PBC. If this is the plan it could mean OpenAI transitioning from:
• A structure where they must prioritise the nonprofit mission over shareholders
To:
• A new structure where they don’t have to — and may not even be legally permitted to do so.
(Note how it seems like the non-profit is giving up a lot here. What is it getting in return here exactly that makes giving up both the profit caps and true control of the business and AGI the best way to pursue its mission? It seems like nothing to me.) So, strange as it sounds, this could turn out to be an even more clever way for Sam and profit-motivated investors to get what they wanted. Profit caps would be gone and profit-motivated investors would have much more influence. And all the while Sam and OpenAI would be able to frame it as if nothing is changing and the non-profit has retained the same control today they had yesterday! (As an aside it looks like the SoftBank funding round that was reported as requiring a loss of nonprofit control would still go through. Their press release indicates that actually all they were insisting on was that the profit caps are removed and they’re granted shares in a new PBC. So it sounds like investors think this new plan would transfer them enough additional profits, and sufficiently neuter the non-profit, for them to feel satisfied.) Now, to be clear, the above might be wrongheaded. I’m looking at the announcement cynically, assuming that some staff at OpenAI, and some investors, want to wriggle out of non-profit control however they can — because I think we have ample evidence that that’s the case! The phrase “nonprofit control” is actually very vague, and those folks might be trying to ram a truck through that hole. At the same time maybe / hopefully there are people involved in this process who are sincere and trying to push things in the right direction. On that we’ll just have to wait and see and judge on the results. Bottom line: The announcement might turn out to be a step in the right direction, but it might also just be a new approach to achieve the same bad outcome less visibly. So do not relax. And if it turns out they’re trying to fool you, don’t be fooled.
: The nonprofit will retain control of OpenAI. We still need stronger oversight and broader input on whether and how AI is pursued at OpenAI and all the AI companies, but this is an important bar to see upheld, and I’m proud to have helped push for it! Now it is time to make sure that control is real—and to guard against any changes that make it harder than it already is to strengthen public accountability.
The devil is in the details we don’t know yet, so the work continues.
Tragedy in the Bay
Roon says the quiet part out loud. We used to think it was possible to do the right thing and care about whether AI killed everyone. Now, those with power say, we can’t even imagine how we could have been so naive; let’s walk that back as quickly as we can so we can finally do some maximizing of the profits.
Roon: the idea of openai having a charter is interesting to me. A relic from a bygone era, belief that governance innovation for important institutions is even possible. Interested parties are tasked with performing exegesis of the founding documents. Seems clear that the “capped profit” mechanism is from a time in which people assumed agi development would be more singular than it actually is. There are many points on the intelligence curve and many players. We should be discussing when Nvidia will require profit caps.
I do not think that the capped profit requires strong assumptions about a singleton to make sense. It only requires that there be an oligopoly where the players are individually meaningful. If you have close to perfect competition and the players have no market power and their products are fully fungible, then yes, of course being a capped profit makes no sense. Although it also does no real harm, since your profits were already rather capped in that scenario. More than that, we have largely lost our ability to actually ask what problems humanity will face, and then ask what would actually solve those problems, and then try to do that thing. We are no longer trying to backward chain from a win. Which means we are no longer playing to win. At best, we are creating institutions that might allow the people involved to choose to do the right thing, when the time comes, if they make that decision.
The Spirit of the Rules
For several reasons, recent developments do still give me hope, even if we get a not-so-great version of the implementation details here. The first is that this shows that the right forms of public pressure can still work, at least sometimes, for some combination of getting public officials to enforce the law and causing a company like OpenAI to compromise. The fight is far from over, but we have won a victory that was at best highly uncertain. The second is that this will give the nonprofit at least a much better position going forward, and the ‘you have to change things or we can’t raise money’ argument is at least greatly weakened. Even though the nine members are very friendly to Altman, they are also sufficiently professional class people, Responsible Authority Figures of a type, that one would expect the board to have real limits, and we can push for them to be kept more in-the-loop and be given more voice. De facto I do not think that the nonprofit was going to get much if any additional financial compensation in exchange for giving up its stake. The third is that, while OpenAI likely still has the ability to ‘weasel out’ of most of its effective constraints and obligations here, this preserves its ability to decide not to. As in, OpenAI and Altman could choose to do the right thing, with the confidence that the board would back them up, and that this structure would protect them from investors and lawsuits. This is very different from saying that the board will act as a meaningful check on Altman, if Altman decides to act recklessly or greedily. It is easy to forget that in the world of VCs and corporate America, in many ways it is not only that you have no obligation to do the right thing.
It is that you have an obligation, and will face tremendous pressure, to do the wrong thing, and certainly to do so if the wrong thing maximizes shareholder value in the short term. Thus, the ability to fight back against that is itself powerful. Altman, and others in OpenAI leadership, are keenly aware of the dangers they are leading us into, even if we do not see eye to eye on what it will take to navigate them or how deadly are the threats we face. Altman knows, even if he claims in public to actively not know. Many members of technical staff know. I still believe most of those who know do not wish for the dying of the light, and want humanity and value to endure in this universe; that they are normative and value good over bad and life over death and so on. So when the time comes, we want them to feel as much permission, and have as much power, to stand up for that as we can preserve for them. It is the same as the Preparedness Framework, except that in this case we have only ‘concepts of a plan’ rather than an actually detailed plan. If everyone involved with power abides by the spirit of the Preparedness Framework, it is a deeply flawed but valuable document. If those involved with power discard the spirit of the framework, it isn’t worth the tokens that compose it. The same will go for a broad range of governance mechanisms. Have Altman and OpenAI been endlessly disappointing? Well, yes. Are many of their competitors doing vastly worse? Also yes. Is OpenAI getting passing grades so far, given that reality does not grade on a curve? Oh, hell no. And it can absolutely be, and at some point will be, too late to try and do the right thing. The good news is, I believe that today is not that day. And tomorrow looks good, too.
https://www.lesswrong.com/posts/spAL6iywhDiPWm4HR/openai-claims-nonprofit-will-retain-nominal-control#comments https://www.lesswrong.com/posts/spAL6iywhDiPWm4HR/openai-claims-nonprofit-will-retain-nominal-control
UK AISI’s Alignment Team: Research Agenda Published on May 7, 2025 4:33 PM GMT
The UK’s AI Security Institute published its Alignment Team’s research agenda: https://www.lesswrong.com/posts/tbnw7LbNApvxNLAg8/uk-aisi-s-alignment-team-research-agenda
Four Predictions About OpenAI's Plans To Retain Nonprofit Control Published on May 7, 2025 3:48 PM GMT
https://www.lesswrong.com/posts/h9Hy5vq9QztoA3qLo/four-predictions-about-openai-s-plans-to-retain-nonprofit#comments https://www.lesswrong.com/posts/h9Hy5vq9QztoA3qLo/four-predictions-about-openai-s-plans-to-retain-nonprofit
Chess - "Elo" of random play? Published on May 7, 2025 2:18 AM GMTI'm interested in a measure of  chess-playing ability that doesn't depend on human players, and while perfect play would be the ideal reference, as long as chess remains unsolved, the other end of the spectrum, the engine whose algorithm is "list all legal moves and uniformly at random pick one of them," seems the natural choice. I read that the formula for Elo rating E is scaled so that, with some assumptions of transitivity of winning odds, pvictory≈11+10ΔE/400,  so it's trivial to convert probability to Elo rating, and my question is roughly equivalent to "What is the probability of victory of random play against, say, Stockfish 17?"  If the Elo is close to 0<a href="#fnp2obaqo0e5" rel="nofollow">[1]</a>, I think that makes  the probability around 10−9 (estimating Stockfish 17's Elo to be 3600). The y-intercept of https://www.lesswrong.com/posts/gx7FuJW9cjwHAZwxh/chess-elo-of-random-play
$500 + $500 Bounty Problem: An (Approximately) Deterministic Maximal Redund Always Exists Published on May 6, 2025 11:05 PM GMT
A lot of our work involves "redunds".[1] A random variable \(\Gamma\) is a(n exact) redund over two random variables \(X_1, X_2\) exactly when both
\(X_1 \to X_2 \to \Gamma\)
\(X_2 \to X_1 \to \Gamma\)
Conceptually, these two diagrams say that \(X_1\) gives exactly the same information about \(\Gamma\) as all of \(X\), and \(X_2\) gives exactly the same information about \(\Gamma\) as all of \(X\); whatever information \(X\) contains about \(\Gamma\) is redundantly represented in \(X_1\) and \(X_2\). Unpacking the diagrammatic notation and simplifying a little, the diagrams say \(P[\Gamma|X_1] = P[\Gamma|X_2] = P[\Gamma|X]\) for all \(X\) such that \(P[X] > 0\). The exact redundancy conditions are too restrictive to be of much practical relevance, but we are more interested in approximate redunds. Approximate redunds are defined by
https://www.lesswrong.com/posts/XHtygebvHoJSSeNPP/some-rules-for-an-algebra-of-bayes-nets https://www.lesswrong.com/posts/sCNdkuio62Fi9qQZK/usd500-usd500-bounty-problem-an-approximately-deterministic
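As a concrete reading of the exact conditions above, here is a minimal numpy sketch that checks \(P[\Gamma|X_1] = P[\Gamma|X]\) and \(P[\Gamma|X_2] = P[\Gamma|X]\) on the support of a finite joint distribution given as an array. It only illustrates the exact definition, not the approximate version the bounty is about; the array layout and helper name are assumptions, not the authors' formal setup.

```python
import numpy as np

def is_exact_redund(joint, tol=1e-9):
    """Check the exact redundancy conditions, where
    joint[x1, x2, g] = P[X1=x1, X2=x2, Gamma=g] (finite case)."""
    joint = np.asarray(joint, dtype=float)
    p_x = joint.sum(axis=2)  # P[X1, X2]
    p_g_given_x = np.divide(joint, p_x[..., None],
                            out=np.zeros_like(joint), where=p_x[..., None] > 0)
    p_x1g = joint.sum(axis=1)  # P[X1, Gamma]
    p_g_given_x1 = np.divide(p_x1g, p_x1g.sum(axis=1, keepdims=True),
                             out=np.zeros_like(p_x1g),
                             where=p_x1g.sum(axis=1, keepdims=True) > 0)
    p_x2g = joint.sum(axis=0)  # P[X2, Gamma]
    p_g_given_x2 = np.divide(p_x2g, p_x2g.sum(axis=1, keepdims=True),
                             out=np.zeros_like(p_x2g),
                             where=p_x2g.sum(axis=1, keepdims=True) > 0)
    for x1 in range(joint.shape[0]):
        for x2 in range(joint.shape[1]):
            if p_x[x1, x2] > 0:  # only check X with positive probability
                if not np.allclose(p_g_given_x[x1, x2], p_g_given_x1[x1], atol=tol):
                    return False
                if not np.allclose(p_g_given_x[x1, x2], p_g_given_x2[x2], atol=tol):
                    return False
    return True

# Example: X1 = X2 = Gamma = a fair coin (a perfectly redundant copy).
joint = np.zeros((2, 2, 2))
joint[0, 0, 0] = 0.5
joint[1, 1, 1] = 0.5
print(is_exact_redund(joint))  # True
```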
No title Published on May 6, 2025 10:23 PM GMT
https://www.lesswrong.com/posts/shTQiu6zNtJ257Ra9/loss-exp-learned-concepts#comments https://www.lesswrong.com/posts/shTQiu6zNtJ257Ra9/loss-exp-learned-concepts
Zuckerberg’s Dystopian AI Vision Published on May 6, 2025 1:50 PM GMT
You think it’s bad now? Oh, you have no idea. In his talks with Ben Thompson and Dwarkesh Patel, Zuckerberg lays out his vision for our AI future. I thank him for his candor. I’m still kind of boggled that he said all of it out loud. We will start with the situation now. How are things going on Facebook in the AI era? Oh, right.
: Again, it happened again. Opened Facebook and I saw this. I looked at the comments and they’re just unsuspecting boomers congratulating the fake AI gen couple [images]
Deepfates: You think those are real boomers in the comments? [image]
This continues to be 100% Zuckerberg’s fault, and 100% an intentional decision. The algorithm knows full well what kind of post this is. It still floods people with them, especially if you click even once. If they wanted to stop it, they easily could. There’s also the rather insane and deeply embarrassing AI bot accounts they have tried out on Facebook and Instagram. Compared to his vision of the future? You ain’t seen nothing yet.
Zuckerberg Tells it to Thompson
The interview with Ben Thompson centers on business models. It was like if you took a left wing caricature of why Zuckerberg is evil, combined it with a left wing caricature about why AI is evil, and then fused them into their final form. Except it’s coming directly from Zuckerberg, as explicit text, on purpose. It’s understandable that many leave such interviews and related stories saying this: Big tech atomises you, isolates you, makes you lonely and depressed – then it rents you an AI friend, an AI therapist, an AI lover. Big tech are parasites who pretend they are here to help you.
When asked what he wants to use AI for, Zuckerberg’s primary answer is advertising, in particular an ‘ultimate black box’ where you ask for a business outcome and the AI does what it takes to make that outcome happen. I leave all the ‘do not want’ and ‘misalignment maximalist goal out of what you are literally calling a black box, film at 11 if you need to watch it again’ and ‘general dystopian nightmare’ details as an exercise to the reader. He anticipates that advertising will then grow from the current 1%-2% of GDP to something more, and Thompson is ‘there with’ him; ‘everyone should embrace the black box.’ His number two use is ‘growing engagement on the customer surfaces and recommendations.’ As in, advertising by another name, and using AI in predatory fashion to maximize user engagement and drive addictive behavior. In case you were wondering if it stops being this dystopian after that? Oh, hell no.
Mark Zuckerberg: You can think about our products as there have been two major epochs so far. The first was you had your friends and you basically shared with them and you got content from them and now, we’re in an epoch where we’ve basically layered over this whole zone of creator content. So the stuff from your friends and followers and all the people that you follow hasn’t gone away, but we added on this whole other corpus around all this content that creators have that we are recommending.
Well, the third epoch is I think that there’s going to be all this AI-generated content… … So I think that these feed type services, like these channels where people are getting their content, are going to become more of what people spend their time on, and the better that AI can both help create and recommend the content, I think that that’s going to be a huge thing. So that’s kind of the second category. … The third big AI revenue opportunity is going to be business messaging. … And the way that I think that’s going to happen, we see the early glimpses of this because business messaging is actually already a huge thing in countries like Thailand and Vietnam. So what will unlock that for the rest of the world? It’s like, it’s AI making it so that you can have a low cost of labor version of that everywhere else.
Also he thinks everyone should have an AI therapist, and that people want more friends so AI can fill in for the missing humans there. Yay.
: I don’t really have words for how much I hate this. But I also don’t have a solution for how to combat the genuine isolation and loneliness that people suffer from. AI friends are, imo, just a drug that lessens the immediate pain but will probably cause far greater suffering.
Well, I guess the fourth one is the normal ‘everyone use AI now,’ at least? And then, the fourth is all the more novel, just AI first thing, so like Meta AI.
He’s Still Defending Llama 4
He also blames Llama-4’s terrible reception on user error in setup, and says they now offer an API so people have a baseline implementation to point to, and says essentially ‘well of course we built a version of Llama-4 specifically to score well on Arena, that only shows off how easy it is to steer it, it’s good actually.’ Neither of them, of course, even bothers to mention any downside risks or costs of open models.
Big Meta Is Watching You
The killer app of Meta AI is that it will know all about all your activity on Facebook and Instagram and use it against/for you, and also let you essentially ‘talk to the algorithm,’ which I do admit is kind of interesting, but I notice Zuckerberg didn’t mention an option to tell it to alter the algorithm, and Thompson didn’t ask. There is one area where I like where his head is at: I think one of the things that I’m really focused on is how can you make it so AI can help you be a better friend to your friends, and there’s a lot of stuff about the people who I care about that I don’t remember, I could be more thoughtful. There are all these issues where it’s like, “I don’t make plans until the last minute”, and then it’s like, “I don’t know who’s around and I don’t want to bug people”, or whatever. An AI that has good context about what’s going on with the people you care about, is going to be able to help you out with this.
That is… not how I would implement this kind of feature, and indeed the more details you read the more Zuckerberg seems determined to do even the right thing in the most dystopian way possible, but as long as it’s fully opt-in (if not, wowie moment of the week) then at least we’re trying at all.
Zuckerberg Tells it to Patel
There was good content here, Zuckerberg in many ways continues to be remarkably candid. But it wasn’t as dense or hard hitting as many of Patel’s other interviews. One key difference between the interviews is that when Zuckerberg lays out his dystopian vision, you get the sense that Thompson is for it, whereas Patel is trying to express that maybe we should be concerned.
Another is that Patel notices that there might be more important things going on, whereas to Thompson nothing could be more important than enhancing ad markets. When asked what changed since Llama 3, Zuckerberg leads off with the ‘personalization loop.’ Zuckerberg still claims Llama 4 Scout and Maverick are top notch. Okie dokie. He doubles down on ‘open source will become most used this year’ and that this year has been Great News For Open Models. Okie dokie. His heart’s clearly not in claiming it’s a good model, sir. His heart is in it being a good model for Meta’s particular commercial purposes and ‘product value’ as per people’s ‘revealed preferences.’ Those are the modes he talked about with Thompson. He’s very explicit about this. OpenAI and Anthropic are going for AGI and a world of abundance, with Anthropic focused on coding and OpenAI towards reasoning. Meta wants fast, cheap, personalized, easy to interact with all day, and (if you add what he said to Thompson) to optimize feeds and recommendations for engagement, and to sell ads. It’s all for their own purposes. He says Meta is specifically creating AI tools to write their own code for internal use, but I don’t understand what makes that different from a general AI coder? Or why they think their version is going to be better than using Claude or Gemini? This feels like some combination of paranoia and bluff. Thus, Meta seems to at this point be using the open model approach as a recruiting or marketing tactic? I don’t know what else it’s actually doing for them. As Dwarkesh notes, Zuckerberg is basically buying the case for superintelligence and the intelligence explosion, then ignoring it to form an ordinary business plan, and of course to continue to have their safety plan be ‘lol we’re Meta’ and release all their weights. I notice I am confused why their tests need hundreds of thousands or millions of people to be statistically significant? Impacts must be very small, and also the statistical techniques they’re using don’t seem great. But also, it is telling that his first thought of experiments to run with AI is experiments run on his users. In general, Zuckerberg seems to be thinking he’s running an ordinary dystopian tech company doing ordinary dystopian things (except he thinks they’re not dystopian, which is why he talks about them so plainly and clearly) while other companies do other ordinary things, and has put all the intelligence explosion related high weirdness totally out of his mind or minimized it to specific use cases, even though he intellectually knows that isn’t right. He, CEO of Meta, says people use what is valuable to them and people are smart and know what is valuable in their lives, and when you think otherwise you’re usually wrong. Cue the laugh track. First named use case is talking through difficult conversations they need to have. I do think that’s actually a good use case candidate, but also easy to pervert. (29:40) The friend quote: The average American only has three friends ‘but has demand for meaningfully more, something like 15… They want more connection than they have.’ His core prediction is that AI connection will be a complement to human connection rather than a substitute. I tentatively agree with Zuckerberg, if and only if the AIs in question are engineered (by the developer, user or both, depending on context) to be complements rather than substitutes. You can make it one way. However, when I see Meta’s plans, it seems they are steering it the other way.
Zuckerberg is making a fully general defense of adversarial capitalism and attention predation – if people are choosing to do something, then later we will see why it turned out to be valuable for them and why it adds value to their lives, including virtual therapists and virtual girlfriends. But this proves (or implies) far too much as a general argument. It suggests full anarchism and zero consumer protections. It applies to heroin or joining cults or being in abusive relationships or marching off to war and so on. We all know plenty of examples of self-destructive behaviors. Yes, the great classical liberal insight is that mostly you are better off if you let people do what they want, and getting in the way usually backfires. If you add AI into the mix, especially AI that moves beyond a ‘mere tool,’ and you consider highly persuasive AIs and algorithms, asserting ‘whatever the people choose to do must be benefiting them’ is Obvious Nonsense. I do think virtual therapists have a lot of promise as value adds, if done well. And also great danger to do harm, if done poorly or maliciously. Dwarkesh points out the danger of technology reward hacking us, and again Zuckerberg just triples down on ‘people know what they want.’ People wouldn’t let there be things constantly competing for their attention, so the future won’t be like that, he says. Is this a joke? I do get that the right way to design AI-AR glasses is as great glasses that also serve as other things when you need them and don’t flood your vision, and that the wise consumer will pay extra to ensure it works that way. But where is this trust in consumers coming from? Has Zuckerberg seen the internet? Has he seen how people use their smartphones? Oh, right, he’s largely directly responsible. Frankly, the reason I haven’t tried Meta’s glasses is that Meta makes them. They do sound like a nifty product otherwise, if execution is good. Zuckerberg is a fan of various industrial policies, praising the export controls and calling on America to help build new data centers and related power sources. Zuckerberg asks, would others be doing open models if Meta wasn’t doing it? Aren’t they doing this because otherwise ‘they’re going to lose?’ Do not flatter yourself, sir. They’re responding to DeepSeek, not you. And in particular, they’re doing it to squash the idea that r1 means DeepSeek or China is ‘winning.’ Meta’s got nothing to do with it, and you’re not pushing things in the open direction in a meaningful way at this point. His case for why the open models need to be American is that our models embody an American view of the world in a way that Chinese models don’t. Even if you agree that is true, it doesn’t answer Dwarkesh’s point that everyone can easily switch models whenever they want. Zuckerberg then does mention the potential for backdoors, which is a real thing since ‘open model’ only means open weights; they’re not actually open source, so you can’t rule out a backdoor. Zuckerberg says the point of Llama Behemoth will be the ability to distill it. So making that an open model is specifically so that the work can be distilled. But that’s something we don’t want the Chinese to do, asks Padme? And then we have a section on ‘monetizing AGI’ where Zuckerberg indeed goes right to ads and arguing that ads done well add value. Which they must, since consumers choose to watch them, I suppose, per his previous arguments?
When You Need a Friend
To be fair, yes, it is hard out there. We are all limited, and so are our options.
(Reprise from last week): Zuckerberg explaining how Meta is creating personalized AI friends to supplement your real ones: “The average American has 3 friends, but has demand for 15.”
Daniel Eth: This sounds like something said by an alien from an antisocial species that has come to earth and is trying to report back to his kind what “friends” are.
https://x.com/SamRo/status/1917921435273637965 : imagine having 15 friends.
https://x.com/modestproposal1/status/1917941523854881228 : “The Trenchcoat Mafia. No one would play with us. We had no friends. The Trenchcoat Mafia. Hey I saw the yearbook picture it was six of them. I ain’t have six friends in high school. I don’t got six friends now.”
https://x.com/kevinroose/status/1918330595626893472 : The Meta vision of AI — hologram Reelslop and AI friends keeping you company while you eat breakfast alone — is so bleak I almost can’t believe they’re saying it out loud.
Exactly how dystopian are these ‘AI friends’ going to be?
https://x.com/gfodor/status/1918171348922450264 (being modestly unfair): What he’s not saying is those “friends” will seem like real people. Your years-long friendship will culminate when they convince you to buy a specific truck. Suddenly, they’ll blink out of existence, having delivered a conversion to the company who spent $3.47 to fund their life.
Soible_VR: not your weights, not your friend.
Why would they then blink out of existence? There’s still so much more that ‘friend’ can do to convert sales, and also you want to ensure they stay happy with the truck and give it great reviews and so on, and also you don’t want the target to realize that was all you wanted, and so on. The true ‘AI friend’ (https://en.wikipedia.org/wiki/Maniac_(miniseries)) plays the long game, and is happy to stick around to monetize that bond – or maybe to get you to pay to keep them around, plus some profit margin. The good ‘AI friend’ world is, again, one in which the AI friends are complements, or are only substituting while you can’t find better alternatives, and actively work to help you get and deepen ‘real’ friendships. Which is totally something they can do. Then again, what happens when the AIs really are above human level, and can be as good ‘friends’ as a person? Is it so impossible to imagine this being fine? Suppose the AI was set up to perfectly imitate a real (remote) person who would actually be a good friend, including reacting as they would to the passage of time and them sometimes reaching out to you, and also that they’d introduce you to their friends which included other humans, and so on. What exactly is the problem? And if you then give that AI ‘enhancements,’ such as happening to be more interested in whatever you’re interested in, having better information recall, watching out for you first more than most people would, etc, at what point do you have a problem? We need to be thinking about these questions now.
Perhaps That Was All a Bit Harsh
I do get that, in his own way, the man is trying. You wouldn’t talk about these plans in this way if you realized how the vision would sound to others. I get that he’s also talking to investors, but he has full control of Meta and isn’t raising capital (https://stratechery.com/2025/meta-earnings-metas-deteriorating-ad-metrics-capex-meta/). In some ways this is a microcosm of key parts of the alignment problem. I can see the problems Zuckerberg thinks he is solving, the value he thinks or claims he is providing.
I can think of versions of these approaches that would indeed be ‘friendly’ to actual humans, and make their lives better, and which could actually get built. Instead, on top of the commercial incentives, all the thinking feels alien. The optimization targets are subtly wrong. There is the assumption that the map corresponds to the territory, that people will know what is good for them so any ‘choices’ you convince them to make must be good for them, no matter how distorted you make the landscape, without worry about addiction to Skinner boxes or myopia or other forms of predation. That the collective social dynamics of adding AI into the mix in these ways won’t get twisted in ways that make everyone worse off. And of course, there’s the continuing to model the future world as similar and ignoring the actual implications of the level of machine intelligence we should expect. I do think there are ways to do AI therapists, AI ‘friends,’ AI curation of feeds and AI coordination of social worlds, and so on, that contribute to human flourishing, that would be great, and that could totally be done by Meta. I do not expect it to be at all similar to the one Meta actually builds.
https://www.lesswrong.com/posts/QNkcRAzwKYGpEb8Nj/zuckerberg-s-dystopian-ai-vision#comments https://www.lesswrong.com/posts/QNkcRAzwKYGpEb8Nj/zuckerberg-s-dystopian-ai-vision
My Reasons for Using Anki Published on May 6, 2025 7:01 AM GMT
Introduction
In some circles, having an Anki habit seems to hold similar weight to clichés like "you should meditate", "you should eat healthy", or "you should work out". There's a sense that "doing Anki is good", despite most people in the circles not actually using memory systems. I've been using my memory system, Anki, daily for two or more years now. Here are the high-level reasons I use memory systems. I don't think memory systems are a cure-all; on occasion, I doubt their value. However, Anki provides enough benefit for me to spend 1h/day reviewing flashcards. This blog post explains my reasons for spending >100 hours using Anki this past college semester. This blog post will provide insight for both people with a memory system practice and those who are considering one.
[image: my anki heatmap]
My reasons for using Anki
Learn things quickly and effectively
Above all, my use of Anki doesn't fit into neat learning projects. The most meaningful and interesting Anki cards have come from spontaneous cards guided by my natural curiosity and learn drive (https://supermemo.guru/wiki/Learn_drive)
https://www.lesswrong.com/posts/kBA4zRdzxutonRrzg/my-reasons-for-using-anki