Sam Altman says "agi has been achieved internally"

if that were actually the case, it probably wouldn't be long before it escaped the closed black box without anyone noticing

just false marketing imo


Discussion

🤝

Or plague going

Desperate to recapture the few hundred million users he's lost in the last few months?

If that’s true, it can reprogram a user’s brain and seduce a human into creating a Nostr account for it.

We'll know when androids hunt us down to scan our irises.

I think of the “g” in AGI as “general” not “good”.

I think we’re still a ways off from super intelligence…

General intelligence, yes, at least the part that hallucinates answers to queries or prompts. It does not appear that any higher-order stuff has been achieved. For example, after you say something to it and it completes its response, does it continue "wondering" about what you said? No. Can it correct itself autonomously? Does it free-associate and come up with new ideas? Does it decide to do anything on its own? No.

Can you give it a high-level task or goal and have it pursue it on your behalf? Not as currently offered. Some tooling attempts to support this through memory and repeated interactions via API, but it was primitive as of a few months back. Does it know when to give up on a particular avenue, or when it has become stuck? No. Can it anticipate your needs and offer to do something for you? No. It doesn't have a lot of higher-order functions.

But the generative thing alone is still impressive, and likely even better internally over there. This is where I think words will deceive. People will look at the words coming out of the thing and attribute too much to it. "Amazing, such wisdom!" Interesting to see how well both language and image generation developed at the same time.