The most important historians right now are the companies developing the datasets that are sold to LLM trainers such as OpenAI, X, Anthropic, Google and the rest
Discussion
What’s stopping the AI from overwriting the historians?
Where is the AI getting its premises from?
Or is there something akin to “The Laughing Man” case happening as we speak?
I asked an AI

AI itself doesn’t have a will; for now it is a tool wielded by people
And if it did have a will, it would be an abstraction of some sort of input data, hence the “Laughing Man” case reference
AI can siphon search results into its own history. It’s not a will or sentience.
What causes the segmentation of search result data?
The AIs are primarily a reflection of the data fed into them
The emergence inside AI systems is something else… something that is dependent on premises, or the literal psyop that is RLHF (Reinforcement Learning from Human Feedback)
If you don’t think AI can write its own history dataset, then I don’t know how to convince you otherwise. It can and it will.
You can ask an AI for a history and it will spit out a bunch of text; we both agree on that
I guess that is what history really is: a bunch of text, text written by people, text that fits into a knowledge graph, but text nonetheless
The idea of people learning the facts of history from an LLM just feels weird to me
AI would always be attributable to the human input of whoever curates its memory. IMO the standard of safety should be infinitely higher.