Jet fuel for the Great Weirding :)
Probably no apocalyptic foom, but we're just getting started integrating LLMs into feedback loops/agent models. It turns out that language token prediction is a really good trick for dealing with fuzzy inputs/world models, far beyond just generating text. Text generation alone is already pretty disruptive to work, but agents that can be delegated tasks will be more so, I think.
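To make the feedback-loop idea concrete, here's a minimal toy sketch (nothing here is a real API; `fake_llm` and the lambda environment are stand-ins for a model call and a tool/world interface): the model proposes an action, the environment returns an observation, and the transcript grows until the model declares it's done.

```python
def fake_llm(history):
    """Stand-in for an LLM call: picks the next action from the transcript."""
    if "observation: file found" in history:
        return "done: report file contents"
    return "action: search for file"

def run_agent(environment, max_steps=5):
    # The transcript itself is the agent's "state" -- fuzzy text in, text out.
    history = "task: find the config file"
    for _ in range(max_steps):
        reply = fake_llm(history)
        if reply.startswith("done:"):
            return reply
        observation = environment(reply)
        history += "\n" + reply + "\nobservation: " + observation
    return "gave up"

# A toy environment that "finds the file" on the first search.
result = run_agent(lambda action: "file found")
print(result)  # done: report file contents
```

The point is just how little glue is needed: the loop is trivial, and all the hard work is pushed into the token predictor's handling of the fuzzy transcript.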