Re: AI does not truly understand concepts such as addition, subtraction, or text summarization. Rather, it produces a mathematical estimate to satisfy our desired results.

If the model can form internal representations of these concepts, it will predict text better, and it definitely does form them in some cases. So I disagree with this sentence. The training objective motivates it to internalize concepts, and it does; bigger models do so even more. See the video I linked for more on this, especially the unicorn drawing section.


Discussion

The solution to these problems is actually smaller models solving more specific tasks. That's his argument. The larger the model, the more abstract it becomes: it's good at connecting dots, but it will start to struggle with precision. He's not saying a model can't do this; he's saying large LLMs do some things well and some things less well, and viewing them as a one-stop shop for all intelligence is not reasonable. We need RI, we need more specialized tasks, and we need a lot more research into reasoning models.