It appears we are in a period of exponential improvement. A few months ago I was pretty dismissive of the potential of large language models (LLMs), echoing many of the criticisms from Chomsky, Marcus, Kahn, and others suggesting that LLMs were a dead end — nothing more than a gimmick that would not and could not be the basis for AGI.

But since I made those arguments, there have been mind-blowing breakthroughs in multi-modal models, and GPT-4 has demonstrated the ability to learn how to use tools and employ them in tasks (AutoGPT does this today!). These capabilities were detailed even further in the Microsoft Research "Sparks of AGI" paper, which suggests there are emergent properties in these models that defy our understanding.

I've watched many of my confident dismissals of the peak potential of this technology get washed away by advance after advance, arriving faster and faster, on a literally daily basis.

My concern has only risen. It has not abated.
