Jumble is becoming the Chromium of Nostr clients.
Has anyone else noticed that the psychology around AI resembles the psychology that prevailed around COVID? Trying to do everything with AI, much like lockdowns and vaccine mandates, looks insane on its face, yet many people don't seem to recognize the problems. I think this is due to a lack of first-principles thinking: few people know what their first principles are, so they are unable to reason about and judge novel situations when they arise.

Some examples: Indefinite lockdowns were obviously wrong, because there is more to human existence than mere health. Vaccine mandates were obviously wrong, because informed consent is a core principle of medical ethics. Replacing people with AI is obviously wrong, because it is good for people to do dignified work. Yet in all three cases, many seemed, and continue to seem, blind to these obvious conclusions.

Anyway, this article, while lengthy, is an excellent primer on the insanity of the AI industry. It's full of first-principles thinking. Read it to help see past the hype.
The #GitCitadel team is pleased to announce version 0.0.6 of Alexandria, now live on next-alexandria.gitcitadel.eu! This release features a UI overhaul, courtesy of our illustrious frontend developer @Nusa. Notably, the main site menu has been moved into an expandable menu, reducing clutter and making links easier to find.

You'll also notice a fresher, more consistent look to our UI components! That's because Nusa has begun creating a Svelte component library for use within our project. It's documented for AI, so we'll be able to efficiently create consistent, beautiful UIs as we dream up new features. You can see some of the fresh UI components on a publication.

Finally, be sure to check out our Notifications view, which you can reach by logging in, clicking on your profile picture, and then clicking "Notifications". You can view and respond to Nostr notes of _any kind_, and you can see public message threads. Alexandria is one of the first Nostr clients to support public messages.

Thank you to all of our supporters! We're continuing to work on the app behind the scenes, so expect more updates Soon!
Large language models are computational postmodernism.

Postmodernism denies objective reality, or at least insists that objective reality is unknowable. Instead, it says, the shape of our experience is wholly constructed by language. Words themselves, according to this philosophy, do not refer to any objective reality, but are defined solely by their similarities and differences to other words. Postmodernism has become the implicit worldview of much of the West.

LLMs have no world model, no "concept" of objective reality. LLMs do not "know" the things words refer to; all they are is complex mathematical representations of the relationships between words. Sound familiar?

Critics of LLMs argue that the lack of a world model is a fundamental limitation that will prevent them from equaling human intelligence, no matter how much we train and scale them. Yet, if the postmodernists are correct, then none of that should matter. According to postmodernism, we humans have no world model, and our reality is nothing but a construct of language. Thus, LLMs are no different from us. In fact, they may be better, if they can wield language faster, more efficiently, and more effectively than we can.

Of course, now even publications such as The Atlantic are asking whether we're in an AI bubble. Reality always wins. Watch these developments closely. An AI bubble, beyond shaking our economy, will also challenge our very worldview.
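The claim that LLMs are nothing but "representations of the relationships between words" can be made concrete with a toy sketch. Real models learn high-dimensional embeddings from data; the three-dimensional vectors below are invented by hand purely for illustration, not taken from any actual model:

```python
import math

# Hypothetical, hand-picked word vectors (not from a real model).
# In this toy space, a word's "meaning" is just its position
# relative to other words -- the postmodern picture in miniature.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.9],
    "apple": [0.1, 0.9, 0.2],
}

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# "king" sits closer to "queen" than to "apple" in this space --
# similarity between words, with no reference to any real king.
print(cosine(embeddings["king"], embeddings["queen"]))
print(cosine(embeddings["king"], embeddings["apple"]))
```

Nothing in this space refers to an actual monarch or fruit; the numbers encode only how words relate to other words, which is exactly the point of the analogy.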
So Google is "reimagining" the Chrome browser with AI. Notably, they'll be introducing a search based on AI chat, apparently similar to Perplexity. The question nobody's asking is: what are the consequences of putting the web behind a chat interface?

In the near future, perhaps, we'll be making websites for bots. The emerging WebMCP protocol standard already indicates a trend in that direction. Sure, we already have an internet driven by search indexing algorithms, but even so, a search is still open to serendipity. With a chat, I might find the precise answer I'm looking for faster, but I might never find the answer that I need but didn't know to look for. It's the difference between searching the library catalog and browsing the shelves. We need both.