LLMs are AAI…
Artificial Artificial Intelligence.
One could argue that humans have their own internal LLMs: the facility with language they acquire over time. However, humans use language to communicate concepts their brain has formulated or is formulating, whereas software uses LLMs to spit out the most likely word that follows a previous set of words.
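(For the curious, here's roughly what "most likely next word" means, as a toy sketch rather than any real model's code. The vocabulary, the probabilities, and the greedy pick are all made up for illustration; actual LLMs learn distributions over tens of thousands of tokens.)

```python
# Toy illustration of next-word prediction: a made-up "model" that,
# given the last couple of words, ranks candidate next words by
# probability and emits the most likely one. A cartoon, not an LLM.
toy_model = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "pondered": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
}

def next_word(context: tuple[str, str]) -> str:
    candidates = toy_model.get(context, {"...": 1.0})
    # Greedy decoding: just take the highest-probability word.
    return max(candidates, key=candidates.get)

words = ["the", "cat"]
for _ in range(2):
    words.append(next_word(tuple(words[-2:])))
print(" ".join(words))  # prints: the cat sat on
```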
And having said all that, I realize now that many humans qualify more as LLMs than as Intelligences themselves, spewing words endlessly without ever having conceptualized what they represent, merely producing a string of "best fit" responses based on their training.
And now I’m wondering if I was trained to say all this and if I’m just software in a simulation. FUCK.