Due to the "temperature" variance in LLM replies, each response can lead down very different conclusion/persona paths that get baked into the current context. Ask the same question in 10 fresh contexts and, depending on the model's training data and temperature setting, maybe 10% of the responses will be very different. These models are trained on some of the wisest things ever written, and most of the dumb things ever written, so you have to roll the dice a few times to get the wisdom out.
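
To make the "temperature" knob concrete, here's a minimal sketch of temperature-scaled softmax sampling, which is roughly what happens at each token step. The logits and token counts are invented for illustration; real models sample over a vocabulary of tens of thousands of tokens:

    import numpy as np

    def sample_with_temperature(logits, temperature=1.0, rng=None):
        # temperature < 1.0 sharpens the distribution (more deterministic);
        # temperature > 1.0 flattens it (more varied, riskier outputs).
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
        scaled -= scaled.max()          # numerical stability before exp
        probs = np.exp(scaled)
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    # Hypothetical logits for four candidate next tokens.
    logits = [2.0, 1.5, 0.3, -1.0]
    for t in (0.2, 1.0, 2.0):
        rng = np.random.default_rng(0)
        picks = [sample_with_temperature(logits, t, rng) for _ in range(1000)]
        counts = np.bincount(picks, minlength=len(logits))
        print(f"temperature={t}: token counts {counts.tolist()}")

At low temperature the top token wins almost every time; at high temperature the low-probability "dumb" tokens get picked far more often. And because each sampled token feeds back into the context for the next one, a single unlucky early pick can steer the whole reply down a different path.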