I think nostr:npub1f4faufazfl4cf43c893wx5ppnlqfu4evy2shc4p9gknfe56mqelsc5f7ql is asking a good question. Not everyone wants moderation but we’ve got 50 years of experience showing us that eventually all open social software systems either develop a solution to moderation or they get abandoned.

Saying that we’re relying on relays for moderation, while having no tooling or practice on relays for handling and responding to reports, isn’t a solution. Just like Apple threatened to remove Damus from the App Store for how it uses zaps, they can and will do the same over moderation if we get big enough that they look and see nothing is done with the reports on content.

The solution is to make a system where users can easily choose which moderation regime they want to use and then chip in to fund that work. The moderation decisions need to be encoded in such a way that you can easily use them at the client or relay level. That’s an open system with multiple choices for moderation that will let nostr be a sustainable free speech platform.

That’s why I’ve been pushing the ability to have a vocabulary around tagging content that lets people have content warnings and reporting which is actionable. nostr:note1r5exg2e9zg6uwl4al4sqh874m0j0h9kuqh6749hdwpx5jlt2udyql0ndh3
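As a rough sketch of what that actionable vocabulary could look like: NIP-36 defines a `content-warning` tag and NIP-56 defines kind-1984 report events. The helper functions below are illustrative only (signing and publishing are omitted), not any client's actual API.

```python
import json
import time

REPORT_KIND = 1984  # NIP-56 report event kind


def build_report(reporter_pubkey: str, target_event_id: str,
                 target_pubkey: str, report_type: str, reason: str = "") -> dict:
    """Build an unsigned NIP-56-style report event (sketch; signing omitted)."""
    return {
        "kind": REPORT_KIND,
        "pubkey": reporter_pubkey,
        "created_at": int(time.time()),
        "tags": [
            ["e", target_event_id, report_type],  # the event being reported
            ["p", target_pubkey],                 # the author of that event
        ],
        "content": reason,
    }


def add_content_warning(event: dict, reason: str) -> dict:
    """Attach a NIP-36 content-warning tag so clients can blur/hide by default."""
    event.setdefault("tags", []).append(["content-warning", reason])
    return event


report = build_report("reporter_hex", "event_hex", "author_hex", "spam")
print(json.dumps(report["tags"]))
```

Because a report is just a tagged event, any relay or client can consume the same stream of kind-1984 events and act on it however its chosen regime dictates.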


Discussion

It has to be opt-in

Likewise for algos

Nostr has the flexibility to cater to different sensibilities

Yes. No need to reinvent the wheel, the internet has plenty of history here to draw from. The interesting question is what specific solutions can provide various degrees of moderation and choice. I think there’s a potential history making approach brewing here. Or at least the opportunity.

Was at the Human Rights Foundation. In principle activists loved nostr, but after seeing the lack of moderation and the level of chat on Damus, they abandoned it.

A moderation service could just be a multiplexer of relays. Individual relays could moderate, and the multiplexer could add another layer.

The client could ask a few questions on first install to choose the right multiplexer regime.
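A minimal sketch of that multiplexer idea, with all names hypothetical: a moderation regime is just a predicate over events, and the multiplexer merges relay feeds, dedupes, and drops whatever the chosen regime rejects.

```python
from typing import Callable, Iterable

# A "regime" is just a predicate: should this event be shown?
Regime = Callable[[dict], bool]


def family_friendly(event: dict) -> bool:
    """Hypothetical regime: hide anything carrying a content-warning tag."""
    return not any(t and t[0] == "content-warning" for t in event.get("tags", []))


def multiplex(relay_feeds: Iterable[Iterable[dict]], regime: Regime) -> list[dict]:
    """Merge events from several relays, dedupe by event id, apply the regime."""
    seen, out = set(), []
    for feed in relay_feeds:
        for ev in feed:
            if ev["id"] in seen:
                continue
            seen.add(ev["id"])
            if regime(ev):
                out.append(ev)
    return out
```

Swapping regimes is then just swapping the predicate, which is what lets a first-install questionnaire map answers onto a moderation choice without touching the underlying relays.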

I’d venture to say they would have had a different experience if their first nostr experience was nostr:npub1pu3vqm4vzqpxsnhuc684dp2qaq6z69sf65yte4p39spcucv5lzmqswtfch. 😁

I'm on Amethyst, and from what I heard it's more vanilla than Damus. Since it depends on relays and their moderation policies, app stores could warn about inappropriate content. So relay choice will matter.

I might add "safeguarding practices/approaches" to nostr.com, and try building a resource list for client developers

What use case did this foundation have in mind?

It’s an oxymoron to me that human rights activists would want some 3rd-party authority to censor them.

Maybe I’ve misunderstood?

You have. On safeguarding for young people, for example, it's important that they can use nostr clients without being exposed to inappropriate content. We in fact have the tools and the permissionless development environment to make safer spaces than centralised services can.

This is super interesting to me.

One version that I can envision is a build or app that is organized around a particular audience or use case with situationally appropriate relays and algorithm options.

Netiquette was a good start, but this will require a deft balance of intent and engineering.

Of course, there would always be the option of a fully open, unadulterated installation if people chose that.

I don't know the specifics there, but I doubt the issue is wanting "some 3rd party to censor them." Rather, they want to use the platform in a manner that does not result in them being harassed or spammed. Moderation is very rarely about "censorship," and often about community building and community norms.

Yes

Yes.

I think the solution is to give users the option to put a paywall on new communications, i.e. comments by strangers in your threads and private messages from strangers. That is sufficient to discourage most kinds of spam or unwanted content. If I follow someone or write to them first, that means I agree to receive a dickpic from them. What's the deal? Just unfollow and it's done. No need to introduce centralised censorship; just introduce an optional payment to write in my threads or my DMs. I want 100 satoshis from anyone new writing to me in DMs or commenting under my posts. Give me 100 satoshis and I agree to receive spam or porn, and I'll immediately block such users, collecting 100 sats from them.
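The gate being proposed above can be sketched in a few lines. This is a hedged illustration only: the function and parameter names are hypothetical, and real payment verification (e.g. checking NIP-57 zap receipts) is omitted.

```python
PAYWALL_SATS = 100  # the toll a stranger pays to reach me (per the proposal above)


def accept_message(sender: str, follows: set[str],
                   contacted_first: set[str], paid_sats: int) -> bool:
    """Accept a reply or DM if the sender is known, or has paid the toll.

    Following someone, or having written to them first, counts as implicit
    consent; everyone else must attach at least PAYWALL_SATS to get through.
    """
    if sender in follows or sender in contacted_first:
        return True
    return paid_sats >= PAYWALL_SATS
```

The check runs entirely client-side against the user's own follow list, so no central authority is involved, which is the point of the proposal.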

That's a potential solution to spam (though probably not a very effective one). But it's not a solution to targeted abuse about someone. Not all of the problems in an unmoderated space happen when you are name-checked by an abusive user.

Nor does your solution deal with things such as child sexual abuse material, which is illegal.