Yeah, the context window needs to be way more flexible.
Messages are the context window, but that's not what I mean. Models have different "personalities" – DeepSeek will say different things than Claude. If you continue a Claude conversation in DeepSeek, it will know what was "said", and obviously it won't continue as Claude, but it also won't continue as "DeepSeek who said those things", because DeepSeek would never have said those things in the first place. You end up with a new conversation that knows the context but doesn't "agree" with it. I expect merged conversations, even within the same model, to reflect this to some extent. You'll get some value, but a merge will never be "clean".
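A rough sketch of what that transcript hand-off looks like in practice (assuming an OpenAI-compatible chat endpoint; the base URL, model name, and messages here are just placeholders):

```python
from openai import OpenAI

# Point an OpenAI-compatible client at a different provider (placeholder URL/key).
client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_KEY")

# A transcript that originally came from another model (e.g. Claude).
history = [
    {"role": "user", "content": "What's the most elegant sorting algorithm?"},
    {"role": "assistant", "content": "...whatever the original model replied..."},
    {"role": "user", "content": "Keep going from there."},
]

# The new model sees the old assistant turns as if it had written them,
# but nothing makes it "agree" with claims it never would have made itself.
reply = client.chat.completions.create(model="deepseek-chat", messages=history)
print(reply.choices[0].message.content)
```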
Due to the "temperature" variance in LLM replies, each response can also lead down very different conclusion/personality paths that get baked into the current context. Ask the same question in 10 fresh contexts and, depending on the model's training data and temperature setting, maybe 10% of the responses will be very different. They are trained on some of the wisest things ever written, and most of the dumbest things ever written, so you have to roll the dice a few times to get the wisdom out.
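Here's roughly what "rolling the dice" looks like (again assuming an OpenAI-compatible client; the model name and temperature are placeholders, not a recommendation):

```python
from openai import OpenAI

client = OpenAI(api_key="YOUR_KEY")  # placeholder key
prompt = [{"role": "user", "content": "Is rewriting a legacy system ever worth it?"}]

# Same question, ten fresh contexts, non-zero temperature: the replies can diverge.
replies = [
    client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=prompt,
        temperature=1.0,
    ).choices[0].message.content
    for _ in range(10)
]

# Skim the spread; a couple of the ten usually land on a noticeably different take.
for i, r in enumerate(replies, 1):
    print(f"--- roll {i} ---\n{r}\n")
```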