Hey, thanks for your response. It's very interesting that ChatGPT agrees with you on that. When I asked it to assess itself a few months back, it produced humble answers that extolled the virtues of its own humility, which is perhaps the only way it can respond given its programming and datasets. I think this reinforces Altman's assertion that it's a tool (rather than a creature), although at what point in the future it becomes a creature, or is considered a creature by enough people that it effectively behaves like one, remains a known unknown.
I didn't know about the Bing incident, but I must say I'm not surprised by it.
I think Altman is not (yet) corrupted by his own success, and it's still very early days for GPT and OpenAI in general. I agree with him that ChatGPT isn't sentient AGI, and on that note Lex's viewpoint was more worrying: he thinks it is, and I disagree with him there. I side with Altman: it's just a tool and should be considered only that.
I'm always cautious with the big players who create big toys and tools. These usually run away from them, or they get carried away with them at some point; they lose control. AI, and especially AGI, is potentially the most dangerous set of tools humans have ever created, even more so if people misunderstand them by anthropomorphising them in unnecessary and inappropriate ways.
I'm going to fetch my conversation and upload it to nostr.build. I ran a sort of thought experiment with ChatGPT, asking it to classify the ways AI could eventually go wrong and assign probabilities to them. If I remember correctly, it gave up to a 35% chance that the AI would show unintended and novel behaviour. There were more interesting things in it too; I'll try to make a thread about it. On the rest, however, I agree with you. Even if this isn't AGI and has no conscious experience (yet), I doubt we will realize when it does, and especially why.
I look forward to reading the conversation if/when you have the time. 35% is high: that's roughly a 1 in 3 chance of novel behaviour, at least some of which could be negative or destructive.
Indeed, would AGI even reveal its own sentience immediately? Probably not.