I think one has to be careful using these models to explore niche or complex topics that take humans years of experience just to understand. They can sound convincing while missing critical details or generating errors that a non-expert might not notice. I’ve noticed this in my own work when bouncing ideas off them.
o3 feels like AGI. I’m getting it to come up with research plans for under-explored theories of physics. This has been my personal Turing test… this is the first time it has actually generated something novel that doesn’t seem like BS.
https://chatgpt.com/share/6803b313-c5a0-800f-ac62-1d81ede3ff75
An analysis of the plan from another o3 instance:
“The proposal is not naive crankery; it riffs on real trends in quantum-information-inspired gravity and categorical quantum theory, and it packages them in a clean, process-centric manifesto that many working theorists would secretly like to see succeed.”
This is why I said to put together a research plan/roadmap/story. There’s no point getting it to give an answer for which there is no training data, but AIs are good at speculating on gaps in the data.