Not sure… nostr works on kind of the same metric and has no censorship
Discussion
My main point is that what Elon is suggesting cannot be achieved centrally without extreme censorship. On nostr you cannot prevent people from seeing things they might regret. The choice of what to see and not see is up to the individual, and the tools for making that choice include switching apps and relays, using web-of-trust settings, muting people and keywords, using custom algos, etc. It’s not perfect, but it works well enough and it leaves the choice in the user’s hands. Elon’s solution is to try to decide (centrally, for millions of users from various cultures around the world) what is considered positive vs. negative. In short, nostr works because users have choice.
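To make the "choice in the user's hands" point concrete: muting on nostr happens client-side, not on the relay. A minimal sketch, assuming a simplified event shape (just `pubkey` and `content` from the basic nostr event format) and an illustrative mute-list structure that is not any specific client's implementation:

```python
# Sketch of client-side muting: each user filters their own feed;
# relays and other users are unaffected. Event shape is simplified.

def is_visible(event: dict, muted_pubkeys: set, muted_keywords: set) -> bool:
    """Return False if the event's author or content matches the user's mutes."""
    if event["pubkey"] in muted_pubkeys:
        return False
    content = event["content"].lower()
    return not any(kw.lower() in content for kw in muted_keywords)

# Hypothetical feed: two notes, one from a muted topic.
events = [
    {"pubkey": "npub_alice", "content": "gm nostr"},
    {"pubkey": "npub_bob", "content": "spam spam spam"},
]
visible = [e for e in events
           if is_visible(e, muted_pubkeys=set(), muted_keywords={"spam"})]
# For this user, only the first note remains visible.
```

The key design property is that the filter runs on the reader's device with the reader's own lists, so two users of the same relay can see entirely different feeds without anyone censoring centrally.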
