"all other models that use Chain-of-Thought prompting failed to produce even a single valid solution."
These findings support the idea that letting models reason in a richer, more representative latent space works better than having them talk to themselves via [Chain of Thought].
Your Next ‘Large’ Language Model Might Not Be Large After All | Towards Data Science https://share.google/mL9lwS3W0d5oIDVLs
"all other models that use Chain-of-Thought prompting failed to produce even a single valid solution."