Can't remember where I saw it, but I recently read a post that basically stated that, given a specific dataset, any LLM trained on it converges to the same set of "understanding" capabilities.
Which supports your point: they're data compression/navigation tools.
If anything, their development makes me even more bullish on privacy/data reduction.
Their existence also makes the right data even more valuable, which is something I think about a lot as a professional "new explanation creator."