His argument is that the solution to these problems is actually smaller models solving more specific tasks. The larger the model, the more abstract it becomes: it's good at connecting dots, but it starts to struggle with precision. He's not saying a large model can't do this; he's saying large LLMs do some things well and other things less well, and treating one as a one-stop shop for all intelligence isn't reasonable. We need RI, we need more specialized tasks, and we need a lot more research into reasoning models.