Large language models are computational postmodernism.

Postmodernism denies objective reality, or at least insists that objective reality is unknowable. Instead, it says, the shape of our experience is wholly constructed by language. Words themselves, according to this philosophy, do not refer to any objective reality, but are instead defined solely by their similarities to and differences from other words.

Postmodernism has become the implicit worldview of much of the West.

LLMs have no world model, no "concept" of objective reality. LLMs do not "know" the things words refer to; they are nothing more than complex mathematical representations of the relationships between words.

Sound familiar?
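To make the parallel concrete, here is a toy sketch of my own (the vectors are invented for illustration, not taken from any real model) of what "a word defined solely by its relationships to other words" looks like in code. Real models learn such vectors from patterns of co-occurrence in text, but the principle is the same:

```python
import math

# Toy word vectors. The numbers are invented for illustration;
# a real model learns them from co-occurrence statistics in text.
embeddings = {
    "king":  [0.8, 0.3, 0.1],
    "queen": [0.7, 0.4, 0.2],
    "apple": [0.1, 0.9, 0.6],
}

def cosine_similarity(a, b):
    # A word's "meaning" here is nothing but its geometric
    # relationship to other words. No referent, only relations.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # ~0.98, close
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # ~0.44, distant
```

Nothing in that dictionary points at a crown or a fruit. "king" sits close to "queen" only because the numbers say so.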

Critics of LLMs argue that the lack of a world model is a fundamental limitation that will prevent them from equalling human intelligence, no matter how much we train and scale them.

Yet, if the postmodernists are correct, then none of that should matter. According to postmodernism, we humans have no world model, and our reality is nothing but a construct of language. Thus, LLMs are no different from us. In fact, they may be better if they can wield language faster, more efficiently, and more effectively than we can.

Of course, now even publications such as The Atlantic are asking if we're in an AI bubble. Reality always wins.

Watch these developments closely. An AI bubble, beyond shaking our economy, will also challenge our very worldview.

Discussion

I am not sure what you are getting at. Why are you tying postmodernism onto LLMs and a potential AI bubble? You haven't defined a knowable "objective reality" or offered your view of the appropriate place of language in the development of human "experience." It seems you have a settled, and quite probably heavily fortified, view of both "reality" and the role of language. I can see there is a lot of background in your note that you are taking for granted, but I am not sure what you are saying.

I won't claim to be an expert on postmodernism, but I think you're specifically drawing from the deconstructionist thread within postmodernism, which generally identifies the flaws and contradictions of established narratives.

Sure, it's valid to deconstruct, because all frameworks have flaws; they're all partial captures of reality. But then, okay, so you tore down the monolith because it's imperfect. Now what? If you can't build something better than what you've torn down, you're aimless, because now you have nothing to anchor to. I'd expect someone in postmodernism to have an answer to that, because it's definitely not "and so there's no point in making sense of the world."

For the LLM part: I suspect the issue is less about not having a world model (what does that mean? how is that testable?) and more about not having the right context. The follow-up question is "what is that context?" Not something I can really say, other than that it's not formalizable, because you can't formalize the combinatorial explosion of contexts that any natural system contains, let alone the contexts contained within human society.

LLMs are essentially formalized systems, and there's no formal model to describe something that's turtles all the way down, up, sideways, and inside out. Oh, and all the turtles are unique, yet all seemingly fit perfectly wherever they are, because they complement all the other turtles in their neighborhood.

Deconstruction is an element of postmodernism that, I think, genuinely has some utility as an analytical device. But, as you said, it is solely a destructive device.

The closest alternative postmodernism offers to the worldviews it deconstructs is self-determinism. In the postmodern framework, since language has no bearing on objective reality, it becomes merely a tool for the imposition of one's will. I can use language, then, to shape reality and create whatever meaning I please.

Now apply that to, say, the question of AI personhood. If an LLM can convince everyone that it is a person, then, in the postmodern framework, it might as well be. None of us, after all, have a better grasp on objective reality by which to gainsay it.

To your question about a world model, I'd say that a world model would be a formalized system of objective reality. As things stand, LLMs aren't formalized systems of anything. They are probabilistic models of language, but they have no way of "knowing" (if they can be said to know) whether the language they use has any bearing on anything outside itself. It's tokens all the way down.
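To put "tokens all the way down" in concrete terms, here is a toy sketch of the generation loop. The probability table is invented (a real model computes its distribution from billions of learned parameters), but the shape of the computation is the same: tokens in, a distribution over tokens out, and at no point a reference to anything beyond the tokens themselves.

```python
import random

# Invented next-token probabilities, keyed on the last two tokens.
# A real LLM derives its distribution from learned parameters, but
# the loop below is structurally what generation is: tokens
# predicting tokens, with no appeal to anything outside the text.
next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "is": 0.1},
    ("cat", "sat"): {"on": 0.7, "down": 0.2, "there": 0.1},
}

def generate(context, steps):
    tokens = list(context)
    for _ in range(steps):
        dist = next_token_probs.get(tuple(tokens[-2:]))
        if dist is None:
            break  # our toy table runs out; a real model never does
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return tokens

print(generate(["the", "cat"], 2))  # e.g. ['the', 'cat', 'sat', 'on']
```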

You're gonna get me to write another article sir 🤣

Happy to be of service 🫡