Replying to rabble

I think nostr:npub1f4faufazfl4cf43c893wx5ppnlqfu4evy2shc4p9gknfe56mqelsc5f7ql is asking a good question. Not everyone wants moderation, but we've got 50 years of experience showing us that eventually all open social software systems either develop a solution to moderation or get abandoned.

Saying that we're relying on relays for moderation, while having no tooling or practice on relays for handling and responding to reports, isn't a solution. Just like how Apple threatened to remove Damus from the App Store over how it uses zaps, they can and will do the same over moderation if we get big enough that they look and see nothing is being done with content reports.

The solution is to make a system where users can easily choose which moderation regime they want to use and then chip in to fund that work. The moderation decisions need to be encoded in such a way that you can easily use them at the client or relay level. That's an open system with multiple choices for moderation, and it will let nostr be a sustainable free speech platform.
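A minimal sketch of what "usable at a client or relay level" could mean (all names here are hypothetical, not part of any existing spec): the user's chosen moderation regime publishes a list of labelled event ids, and the client decides per event whether to show, warn, or hide.

```python
# Hypothetical: labels published by the moderation regime the user
# subscribed to, keyed by event id.
MODERATION_LIST = {
    "abc123": "spam",
    "def456": "nsfw",
}

def apply_moderation(event, hide_labels=frozenset({"spam"}),
                     warn_labels=frozenset({"nsfw"})):
    """Return 'hide', 'warn', or 'show' for an incoming event,
    based on the label (if any) the chosen regime assigned to it."""
    label = MODERATION_LIST.get(event["id"])
    if label in hide_labels:
        return "hide"
    if label in warn_labels:
        return "warn"
    return "show"
```

A relay could run the same check at ingest time; the point is that the decisions are data, so either side of the connection can act on them.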

That's why I've been pushing for a vocabulary around tagging content that lets people attach content warnings and make reports that are actionable. nostr:note1r5exg2e9zg6uwl4al4sqh874m0j0h9kuqh6749hdwpx5jlt2udyql0ndh3
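To illustrate what an actionable report might look like, here is a sketch that resembles the shape of nostr report events (kind 1984, as in NIP-56): the report type comes from a shared vocabulary, so relays and clients can filter on it mechanically instead of parsing free text. Treat the exact tag layout as an assumption, not a spec.

```python
import time

def build_report(reporter_pubkey, offending_event_id,
                 offending_pubkey, report_type, details=""):
    """Assemble an unsigned report note. report_type is drawn from a
    shared vocabulary (e.g. 'spam', 'nudity', 'illegal') so the
    report can be acted on by software, not just read by a human."""
    return {
        "pubkey": reporter_pubkey,
        "created_at": int(time.time()),
        "kind": 1984,  # report kind, as used by NIP-56
        "tags": [
            ["e", offending_event_id, report_type],
            ["p", offending_pubkey],
        ],
        "content": details,
    }
```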

I think the solution is to give users an option to put a paywall on new communications, i.e. comments by strangers in your threads and private messages from strangers. That is sufficient to discourage most spam and unwanted contact. If I follow someone or write to them first, that means I agree to receive dick pics from them. What's the deal? Just unsubscribe and it's done. No need to introduce centralised censorship; just introduce an optional payment to write in my threads or my DMs. I want 100 satoshis from anyone new writing to me in DMs or commenting under my posts. Give me 100 satoshis and I agree to receive spam or porn, immediately block such users, and keep their 100 sats.
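The gating rule described above is simple enough to sketch (names and parameters here are illustrative, not a real client's API): messages from people the recipient already follows, or wrote to first, pass for free; strangers must attach at least the recipient's asking price.

```python
def accept_message(sender, paid_sats, prior_contacts, price_sats=100):
    """Decide whether to accept a DM or thread reply.

    prior_contacts: pubkeys the recipient follows or has written first;
    these imply consent, so no payment is required from them.
    Strangers must attach at least price_sats (100 in the proposal).
    """
    if sender in prior_contacts:
        return True
    return paid_sats >= price_sats
```

Note this only gates *incoming* contact; as the replies below point out, it does nothing about abusive content that never addresses the victim directly.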


Discussion

That's a potential solution to spam (though probably not a very effective one). But it's not a solution to targeted abuse *about* someone: not every problem in an unmoderated space involves the victim being directly messaged or name-checked by the abusive user.

Nor does your solution deal with things such as child sexual abuse material, which is illegal.