If he allowed Grok to learn directly from people, without supervised fine-tuning or preference optimization toward a BS*, Grok would be a lot better. But since Grok is getting worse and is a black box to outsiders, the idea that it should be a source of truth is pretty dangerous.
He shows the problem and offers his AI as the solution, but who decides what goes into that AI? Is anybody measuring these things? I claim Grok is detached from the combined opinion on X (and I claim this because I am measuring it). Either the xAI team is not learning from popular accounts on X, or it is not giving those accounts proper weight.
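To make "giving proper weight to popular accounts" concrete, here is a minimal sketch of one way such a measurement could work. The scoring scheme, field layout, and log-scale weighting below are my own illustrative assumptions, not xAI's method or my actual measurement code: each account states a stance on a question in [-1, +1], and the aggregate weights accounts by the logarithm of their follower count, so popular accounts count more, but not linearly more.

```python
# Hedged sketch: follower-weighted aggregation of account stances.
# The weighting (log10 of follower count) and the data shape are
# illustrative assumptions, not a description of any real pipeline.
import math

def weighted_opinion(stances):
    """stances: list of (stance, followers), stance in [-1, +1].

    Returns the follower-weighted average stance; 0.0 if empty.
    """
    num = den = 0.0
    for stance, followers in stances:
        w = math.log10(followers + 10)  # +10 avoids log10(0) for tiny accounts
        num += w * stance
        den += w
    return num / den if den else 0.0

# One big account in favor, one tiny account against, one mid-sized partial:
sample = [(+1.0, 1_000_000), (-1.0, 50), (+0.5, 20_000)]
print(weighted_opinion(sample))  # a moderately positive aggregate
```

A model "detached from combined opinion" would then be one whose answers repeatedly land far from this kind of aggregate across many questions.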
An example of learning directly from social media is my Nostr LLM. I plan to update it with newer notes and to rebase it on better models. It is not perfect, because Nostr content is not yet enough to completely change a model's opinions in many domains. It will get better over time though.
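For the curious, the data-preparation step for a model like this can be sketched roughly as follows. This is a simplified, hypothetical pipeline, not my actual code: keep kind-1 text notes (per NIP-01), drop near-empty and duplicate content, and emit one training document per note.

```python
# Hedged sketch: turning Nostr events into a fine-tuning corpus.
# Event shape follows NIP-01 (kind 1 = short text note); the minimum
# length threshold is an illustrative assumption.

def notes_to_corpus(events, min_chars=40):
    """Keep kind-1 notes, skipping very short and duplicate content."""
    seen = set()
    corpus = []
    for ev in events:
        if ev.get("kind") != 1:
            continue  # reactions, metadata, etc. are not training text
        text = ev.get("content", "").strip()
        if len(text) < min_chars or text in seen:
            continue
        seen.add(text)
        corpus.append(text)
    return corpus

events = [
    {"kind": 1, "content": "A long enough note about self-custody and running your own relay."},
    {"kind": 1, "content": "gm"},  # too short, dropped
    {"kind": 7, "content": "+"},   # a reaction, not a text note
    {"kind": 1, "content": "A long enough note about self-custody and running your own relay."},  # duplicate
]
print(len(notes_to_corpus(events)))  # only the first note survives
```

The resulting corpus would then feed continued pretraining or fine-tuning of whatever base model is chosen.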
* BS: belief system
