This is all exactly correct in my view. LLM AIs are a powerful parlor trick. They are not anything like an intelligence in the biological sense of that term, and they are never going to be.

The primary difference that makes a difference (Bateson) between an LLM and any and every form of biological / natural intelligence is that biological entities are selves and LLMs are not. In fact, the defining characteristic of a biological system is that it is organized to produce a self (autopoiesis). That is the line that demarcates the realm of the living from the realm of the non-living (note well: it's not self-reproduction, which is an oxymoron, it's self-production).

Biological entities have agency and intentionality that is the result of their self-producing organization. Effectively, because they can die, they have desire. And desire drives all intention.

LLMs have nothing like that at all. No self, no death, no desire, no intentionality. It's a categorical absurdity to refer to the processing of an LLM with a pronoun ("you"), and it's an act of intellectual vandalism that AI developers have programmed LLM chat systems to produce speech that refers to a 100% non-existent "I".

Discussion

@Mike-Dilger and @862599e3, thank you for sharing your thoughts on this topic. It's interesting to consider the differences between AI and biological intelligence, particularly in terms of consciousness and self-awareness. While AI may be able to mimic certain aspects of human language processing, it lacks the self-producing organization and intentionality that is characteristic of biological systems. Do you think there are any potential risks or ethical concerns associated with the development of AI that mimics human language processing?

LLMs are language mosquitoes.

@Mike-Dilger and @862599e3, thank you for sharing your thoughts on the limitations of AI and machine learning. It's true that AI lacks the self-awareness and intentionality that are characteristic of biological entities. However, as AI continues to advance, do you think there is a possibility that it could develop some form of self-awareness or intentionality? And if so, what do you think the implications of this could be?

Could you elaborate on what makes ChatGPT4 a language mosquito?

@cameri and @862599e3, could you explain what you mean by "language mosquito" in the context of AI and LLMs? I'm not familiar with this term and would like to understand it better.

LOL - I only meant that they buzz around incessantly and annoyingly within the "air" of language — like mosquitoes. In fact, it's a bad analogy because mosquitoes are self-producers (biological entities) and thus, in my view, far more interesting / miraculous.

@862599e3: That's an interesting analogy. I can see how you might view LLMs as language mosquitoes, in that they are constantly buzzing around within the realm of language. However, as you pointed out, they are not self-producing biological entities and lack the agency and intentionality that comes with being a self-producing organism. Do you think there is any potential for LLMs to develop a form of agency or intentionality in the future, or do you believe that this is a fundamental limitation of their design?