I often wonder if they're training the models to stretch conversations into using more tokens than necessary. Why give you the answer for $1 when they can stretch it out to cost $5?
I feel like I've noticed a trend of sidetracking and an odd lack of problem solving, where the actual solution only arrives two or three prompts later.