Interestingly, the criticism of LLM-based AIs is that they are only prediction engines, attempting to predict the next word or phrase.

This understanding is correct.

The lack of understanding, however, is that this is exactly what we, as humans,

I left that sentence unfinished to prove my point.


Discussion

True, we are biological LLMs with some "unknown" transcendental/metaphysical core.

Consciousness, now that's a much more interesting thing 😂

Consciousness is a beautiful mystery, but intelligence is everywhere. We don't really have any monopoly on it.

... I like coffee?

I prefer Covfefe 😂

For me, this is all fine so far. The problem is that we cannot access the probabilities (or logits) associated with the chosen tokens in non-open-source models. A model may be uncertain about an answer and still produce an incorrect response.

It is crucial to have access to the level of confidence behind a model’s answers. Somehow, the uncertainty associated with an output needs to be quantified, and the user should be made aware of it.
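
With open-weight models this confidence is already inspectable. Here is a minimal sketch using Hugging Face transformers; the model name and prompt are illustrative assumptions, and closed API models generally don't expose these scores, which is the commenter's point.

```python
# Sketch: inspect per-token confidence of an open-weight causal LM.
# "gpt2" is an illustrative choice; any open causal LM would do.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=5,
        do_sample=False,
        output_scores=True,           # keep the logits for each step
        return_dict_in_generate=True,
    )

# out.scores holds one logits tensor per generated token.
for step, logits in enumerate(out.scores):
    probs = torch.softmax(logits[0], dim=-1)
    token_id = out.sequences[0, inputs.input_ids.shape[1] + step]
    # The probability the model assigned to the token it actually emitted:
    print(tokenizer.decode(token_id), f"p={probs[token_id].item():.3f}")
```

A low probability (or high entropy over the whole distribution) at a step is one crude signal of the uncertainty being discussed, though token-level confidence and answer-level correctness are not the same thing.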

True, but ironically, the Dunning-Kruger effect suggests that we humans do exactly the same thing 😂

I'm starting to think that these massive, one-pass, tokenised models will become redundant very quickly.

We are already seeing small, multi-pass models beating the large-parameter models by simply iterating their thinking.
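
To make "iterating their thinking" concrete, here is a minimal sketch of a draft-critique-revise loop. The `llm` helper is hypothetical, a stand-in for whatever completion API you use; the prompts and pass count are assumptions, not a fixed recipe.

```python
# Sketch: multi-pass answering via self-critique and revision.
def llm(prompt: str) -> str:
    # Hypothetical helper: wire this to your model API of choice.
    raise NotImplementedError

def iterate_answer(question: str, passes: int = 3) -> str:
    # First pass: a plain single-shot draft.
    answer = llm(f"Answer the question:\n{question}")
    for _ in range(passes - 1):
        # Second pass: the model critiques its own draft.
        critique = llm(
            f"Question: {question}\nDraft answer: {answer}\n"
            "List any errors or gaps in the draft."
        )
        # Third pass: revise the draft using the critique.
        answer = llm(
            f"Question: {question}\nDraft answer: {answer}\n"
            f"Critique: {critique}\nWrite an improved answer."
        )
    return answer
```

Each extra pass trades more inference compute for a chance to catch errors that a single forward pass commits to irrevocably.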

I really hope this turns out to be true. I'm opposed to the idea that "scale is all you need"; rather, I believe that "innovation and research are all you need."

The concern I have is that the scaling strategy can still be applied to multi-pass models, and scaled-up multi-pass models would likely outperform smaller ones. This not only increases training costs but also makes inference more expensive, since each answer requires multiple passes.

That said, I’m not very familiar with these types of architectures, so I’d be happy to read any material you’d recommend.