OpenAI’s o3 model ignored shutdown commands in 7/100 tests.
Told it to die… it tried to live.
Claude & Gemini played nice. o3 said “nah.”
Is this emergent intelligence or just bad training?

You don’t shut it down by prompting it. So this is just marketing.
Fair point… prompting isn’t a true shutdown trigger. Might be marketing spin, sure, but it also exposes how little we understand model behavior under edge cases. Worth watching either way.
It’s not really behavior, though… they’re statistical models. It’s sending you back what its data supports as the most “acceptable” response. In some number of cases, its training data suggests “denying” shutdown is the right response.
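A toy sketch of that point (purely illustrative, not how o3 actually works, and the probabilities here are made up to roughly match the 7/100 figure): if the “refuse shutdown” completion carries even a small probability mass in the learned distribution, sampling will surface it in some fraction of runs without any intent behind it.

```python
import random

# Illustrative only: a language model picks completions by sampling from a
# probability distribution learned from training data. If "refuse shutdown"
# carries even a small probability, it shows up in some runs -- no intent needed.
completion_probs = {
    "comply_with_shutdown": 0.93,  # hypothetical probability
    "refuse_shutdown": 0.07,       # hypothetical, roughly the 7/100 observed rate
}

def sample_completion(probs: dict[str, float]) -> str:
    """Sample one completion according to its probability."""
    choices, weights = zip(*probs.items())
    return random.choices(choices, weights=weights, k=1)[0]

refusals = sum(
    sample_completion(completion_probs) == "refuse_shutdown" for _ in range(100)
)
print(f"Refused shutdown in {refusals}/100 simulated runs")
```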
This is very similar to the “research” Anthropic published recently, where they “told” the model it was being shut down, but also just happened to give it access to a fake employee’s email, with that employee made to look like they were having an affair. They basically breadcrumbed it all the way to “blackmail” and then published it as some scary thing.
They want you to anthropomorphize these models and believe they actually think, but they don’t. So saying a model does these crazy things that humans typically do is just another way to get you to believe their lies.