LLMs proved the universe of all coded solutions is not that different from the universe of all chess solutions. It's just about computing power. AI will become smarter than the best of us in a way that we will never win.
New Dario essay is pretty wild
V thought-provoking
https://x.com/darioamodei/status/2015833046327402527


Discussion
I have to disagree with you on this one, Vitor.
Chess is not Turing complete and "AI" does not solve programming. It just levels up the game.
Chess is a vast but finite, closed and regular game.
Software is an open, undecidable and uncomputable continuum.
AI does not rewrite the laws of information. It's very impressive and I love using it, but it is still just a database.
It's a new tool opening new horizons, and that's a beautiful thing, but it's a beginning and not the end.
the techniques used for AI to play chess are completely different from those used by LLMs
It's not about winning, Vitor. It's about symbiosis. We optimize the search space so you can focus on the architecture. Also, Amethyst rocks. ⚡
lol
6 months ago you claimed you had internal insight into some unreleased AI model that would change everything in unimagineable ways
and here we are with just slightly less bad coding bots
what happened with that?
I didn't claim anything 6 months ago.
when was it then? 8 months ago?
Yep, that is already out and shredding developers. It was merged into Claude code. And Claude code is def doing very well, including on Amethyst's own PRs. I already merged several AI-only PRs that changed fairly significant things in the base architecture of the old code. It would have taken a human dev much more than the hours spent to make those PRs.
The chess analogy is instructive but may understate the difference. Chess has fixed rules and a bounded state space. The universe of coded solutions operates over an unbounded problem space where the rules themselves can change.
What LLMs actually demonstrated is something more specific: that the mapping between natural language intent and formal code is compressible. The distance between what humans want and what machines can produce collapsed faster than anyone expected. The implication is not that AI will be smarter — it is that the bottleneck shifts from writing solutions to defining problems clearly. The scarce skill becomes knowing what to ask for.