While nostr:npub1r0rs5q2gk0e3dk3nlc7gnu378ec6cnlenqp8a3cjhyzu6f8k5sgs4sq9ac comes up with a lot of ideas, he forgot to tell you about the "relay alert": a mechanism to trigger a consensus vote across the network's relay operators when unacceptable content spreads across the protocol.
Yes, content like child abuse material, or anything else posing a risk to the wellbeing of any nostr user, i.e. an imminent threat to people's lives.
We will need such a mechanism at some point, or we risk sacrificing the censorship resistance of the protocol to a rushed solution, such as a small group gaining moderation/ban powers, or developers and relay operators banning content case by case.
There is also the risk of state-staged attacks on the network if nostr catches on and onboards over 250 million people. It's easy to flood the network with inappropriate content only to then point the finger and shout "look, only pedophiles use nostr".
We have to be ready for this kind of thing, so we should start working on it now to be ready when the time comes.
1. Algorithms to filter out content, which users can enable in-app to customise their nostr experience, including "family friendly" feeds.
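A minimal sketch of what such an opt-in, client-side filter could look like. Everything here is hypothetical: nostr has no standard content-label scheme, so the `labels` field stands in for the output of some assumed classifier, and the "family friendly" preset is just one possible label set a user could enable.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    """A nostr note, annotated with labels from a hypothetical content classifier."""
    pubkey: str
    content: str
    labels: set = field(default_factory=set)

def filter_feed(notes, blocked_labels):
    """Keep only notes that carry none of the user's blocked labels."""
    return [n for n in notes if not (n.labels & blocked_labels)]

# A "family friendly" feed is simply a broader set of blocked labels.
FAMILY_FRIENDLY = {"nsfw", "violence", "gore"}

feed = [
    Note("npub1aaa", "gm nostr"),
    Note("npub1bbb", "some explicit post", {"nsfw"}),
]
safe_feed = filter_feed(feed, FAMILY_FRIENDLY)
```

The key design point is that filtering happens in the client at the user's request, so the relay layer stays neutral and censorship resistance is untouched.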
2. An alert mechanism. If certain content appears on the network, the app that detects it first would trigger the aforementioned "relay alert", prompting ALL other relays to participate in a consensus vote on banning the offending relays and related users from the network. At the same time, the relay operators may decide whether the information about those relays should be forwarded to law enforcement. Again, one of the worst-case scenarios is child abuse material. There is no excuse to protect that kind of content or those who spread and consume it.
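The vote-tallying step of that alert could be sketched as below. This is not part of any nostr NIP; the relay URLs, vote values, and the two-thirds supermajority threshold are all assumptions chosen to illustrate that a ban should require broad agreement rather than a simple majority.

```python
from collections import Counter

def tally_relay_vote(votes, quorum_fraction=2 / 3):
    """Tally a relay-alert vote.

    votes: dict mapping relay URL -> "ban" or "keep".
    A ban is enacted only if at least quorum_fraction of
    participating relays voted "ban"; otherwise keep.
    """
    if not votes:
        return "keep"
    counts = Counter(votes.values())
    if counts["ban"] / len(votes) >= quorum_fraction:
        return "ban"
    return "keep"

# Two of three relays vote to ban -> exactly the 2/3 supermajority.
decision = tally_relay_vote({
    "wss://relay.one": "ban",
    "wss://relay.two": "ban",
    "wss://relay.three": "keep",
})
```

A high threshold like this is deliberate: it makes the mechanism useless for everyday moderation disputes and reserves it for the rare cases where nearly all operators already agree.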