This is a good change from OpenAI:
"We’ll soon begin to route some sensitive conversations—like when our system detects signs of acute distress—to a reasoning model, like GPT‑5-thinking, so it can provide more helpful and beneficial responses, regardless of which model a person first selected."
Essentially, if the system detects that a user is facing a mental health crisis, it should slow down and switch to its strongest reasoning model, just as it would for any other hard problem. New parental controls are also planned.
It’s a shame this wasn’t in place earlier, but I’m glad it’s being prioritized now. https://openai.com/index/building-more-helpful-chatgpt-experiences-for-everyone/