Large language models are computational postmodernism.
Postmodernism denies objective reality, or at least insists that objective reality is unknowable. Instead, it says, the shape of our experience is wholly constructed by language. Words themselves, according to this philosophy, do not refer to any objective reality; they are defined solely by their similarities to, and differences from, other words.
Postmodernism has become the implicit worldview of much of the West.
LLMs have no world model, no "concept" of objective reality. They do not "know" the things words refer to; at bottom, they are complex mathematical representations of the relationships between words.
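For readers who want the mechanical intuition, here is a toy sketch of that idea, sometimes called distributional semantics. The words, vectors, and dimensions below are invented purely for illustration; real models learn embeddings with thousands of dimensions from vast corpora. The point is only that "meaning" in such a system is nothing but a word's position relative to other words.

```python
import math

# Toy "embeddings": invented 3-dimensional vectors, purely illustrative.
# In a real LLM, these are learned from text and have thousands of dimensions.
embeddings = {
    "king":  [0.9, 0.7, 0.1],
    "queen": [0.8, 0.7, 0.2],
    "apple": [0.1, 0.9, 0.6],
}

def cosine_similarity(a, b):
    """How aligned two vectors are; 1.0 means pointing the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# A word's "meaning" here is exhausted by its relations to other words.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high (~0.99)
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # lower (~0.63)
```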
Sound familiar?
Critics of LLMs argue that the lack of a world model is a fundamental limitation that will prevent them from equalling human intelligence, no matter how much we train and scale them.
Yet, if the postmodernists are correct, none of that should matter. According to postmodernism, we humans have no world model either; our reality is nothing but a construct of language. On that view, LLMs are no different from us. In fact, they may be better than us if they can wield language faster, more efficiently, and more effectively than we can.
Of course, if the critics are right and language alone is not enough, the current hype must eventually collide with that limit. Even publications such as The Atlantic are now asking whether we're in an AI bubble. Reality always wins.
Watch these developments closely. An AI bubble bursting would do more than shake our economy; it would challenge our very worldview.
