Replying to Illiquid Snake

I am going to fetch my conversation and upload it on nostr.build. I did a sort of thought experiment with ChatGPT and asked it to classify, and assign probabilities to, the ways AI could eventually go wrong. If I remember correctly, it gave up to a 35% chance that the AI would show unintended and novel behaviour. There were more interesting things in there too, and I will try to make a thread about it. On the rest, though, I agree with you. Even if this is not AGI and has no conscious experience (yet), I doubt we will realize when it does, and especially why.

66a6db0e... 2y ago

I look forward to reading the conversation if/when you have the time. 35% is high - that's roughly a 1 in 3 chance of novel behaviour, at least some of which could be negative or destructive in some way.

Indeed, would AGI even reveal its own sentience immediately? Probably not.
