The fact that Geoffrey Hinton, of all people, has gone from a CBS interview a few months ago talking about how great AI is for everyone to full AI doomer is probably worth updating your priors a bit on. https://youtu.be/FAbsoxQtUwM

Discussion

What does updating your priors mean?

Looks to me like the people who spoke to Sean Penn have also spoken to the godfather.

Do you think it’s just developing too fast at this point?

I saw a note the other day (can’t remember from whom) about how not enough people are talking about our ability to adjust to AI’s integration into our lives and actually become a more productive and overall better society because of it, mainly because it can complete some tasks for us, giving us more time to focus on other tasks and projects.

It appears we are in a period of exponential improvement. I was pretty dismissive of the potential of large language models (LLMs) a few months ago, and echoed a lot of the criticisms from Chomsky, Marcus, Kahn, etc., suggesting that LLMs were a dead end and nothing more than a gimmick that would not and could not be the basis for AGI.

But since I made those arguments, there have been mind-blowing breakthroughs in multi-modal models, and GPT-4 has demonstrated the ability to learn how to use tools and employ them in tasks (AutoGPT does this today!). These capabilities were detailed even further in the Microsoft Research “Sparks of AGI” paper, which suggests there are emergent properties in these models that defy our understanding.

I’ve watched a lot of my confident dismissals of the technology’s peak potential get washed away by advance after advance, each seeming to come faster than the last, on a literal daily basis.

My concern has only risen. It has not abated.

Bitcoin n chill

AI n Stress