I think nostr:npub1f4faufazfl4cf43c893wx5ppnlqfu4evy2shc4p9gknfe56mqelsc5f7ql is asking a good question. Not everyone wants moderation, but we’ve got 50 years of experience showing that every open social software system eventually either develops a solution to moderation or gets abandoned.
Saying that we’re relying on relays for moderation, while having no tooling or practice on relays for handling and responding to reports, isn’t a solution. Just like how Apple threatened to remove Damus from the App Store over how it uses zaps, they can and will do the same over moderation if we get big enough that they look and see nothing is being done with content reports.
The solution is to build a system where users can easily choose which moderation regime they want to use and then chip in to fund that work. The moderation decisions need to be encoded in such a way that they can easily be applied at the client or relay level. That’s an open system with multiple choices for moderation, and it’s what will let nostr be a sustainable free speech platform.
That’s why I’ve been pushing for a vocabulary around tagging content that gives people content warnings and reporting that is actionable. nostr:note1r5exg2e9zg6uwl4al4sqh874m0j0h9kuqh6749hdwpx5jlt2udyql0ndh3
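As a rough sketch of what “actionable reporting” could look like at the protocol level, here’s an unsigned report event loosely following NIP-56 (kind 1984), where the report type rides on the `e`/`p` tags so a relay or client can filter on it mechanically. The pubkey and event id below are placeholders, and a real event would still need `pubkey`, `id`, and `sig` after signing:

```python
import json
import time

def build_report(reported_pubkey: str, reported_event_id: str,
                 report_type: str, reason: str) -> dict:
    """Build an unsigned nostr report event (NIP-56, kind 1984).

    The report type (e.g. "spam", "illegal", "impersonation") is the
    third element of the tag, so moderation tooling on a relay or
    client can act on it without parsing free-form text.
    """
    return {
        "kind": 1984,                               # NIP-56 reporting kind
        "created_at": int(time.time()),
        "tags": [
            ["e", reported_event_id, report_type],  # the offending note
            ["p", reported_pubkey],                 # its author
        ],
        "content": reason,                          # optional human-readable note
    }

# Placeholder 64-hex ids, not real keys or events.
report = build_report("deadbeef" * 8, "cafebabe" * 8,
                      "spam", "unsolicited advertising")
print(json.dumps(report, indent=2))
```

The point is that once reports carry a machine-readable type like this, different moderation regimes can consume the same events and publish their decisions in a form clients and relays can each choose to apply.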
