Replying to rabble

I think nostr:npub1f4faufazfl4cf43c893wx5ppnlqfu4evy2shc4p9gknfe56mqelsc5f7ql is asking a good question. Not everyone wants moderation, but we’ve got 50 years of experience showing that eventually every open social software system either develops a solution to moderation or gets abandoned.

Saying that we’re relying on relays for moderation, while having no tooling or practice on relays for handling and responding to reports, isn’t a solution. Just as Apple threatened to remove Damus from the App Store over how it uses zaps, they can and will do the same over moderation if we get big enough that they look and see nothing is being done with content reports.

The solution is to build a system where users can easily choose which moderation regime they want to use and then chip in to fund that work. The moderation decisions need to be encoded in such a way that they can easily be applied at the client or relay level. An open system with multiple choices for moderation is what will let nostr be a sustainable free speech platform.

That’s why I’ve been pushing for a vocabulary for tagging content that gives people content warnings and reporting that is actionable. nostr:note1r5exg2e9zg6uwl4al4sqh874m0j0h9kuqh6749hdwpx5jlt2udyql0ndh3
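
To make that concrete, here is a rough sketch of what that vocabulary looks like on the wire, following the shapes of NIP-36 (content warnings) and NIP-56 (kind 1984 reports). All ids, pubkeys, and timestamps below are placeholders, not real data.

```typescript
// A note whose author flags it as sensitive (NIP-36 content-warning tag).
const flaggedNote = {
  kind: 1,
  pubkey: "<author pubkey>",
  created_at: 1690000000,
  tags: [["content-warning", "graphic imagery"]],
  content: "…",
};

// A report about that note (NIP-56, kind 1984). The third value in the "e"
// tag names the report type (e.g. spam, nudity, illegal, impersonation),
// which is what makes the report actionable by relays and clients.
const report = {
  kind: 1984,
  pubkey: "<reporter pubkey>",
  created_at: 1690000100,
  tags: [
    ["e", "<id of the reported note>", "spam"],
    ["p", "<pubkey of the reported author>"],
  ],
  content: "optional free-text explanation for moderators",
};
```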

I was at the Human Rights Foundation. In principle the activists loved nostr, but after seeing the lack of moderation and the level of chat on Damus they abandoned it.

A moderation service could just be a multiplexer of relays. Individual relays could moderate, and the multiplexer could add another layer.
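
As a rough sketch of that idea (not any existing implementation): a multiplexer could merge events from several relays and apply its own moderation layer on top, for example dropping events from pubkeys flagged by a moderation service. The relay URLs and blocklist here are placeholders; the messages are the standard REQ/EVENT relay protocol.

```typescript
// Minimal relay-multiplexer sketch with a moderation layer on top.
// Assumes Node with the "ws" package; relays and the blocklist are made up.
import WebSocket from "ws";

const UPSTREAM_RELAYS = ["wss://relay.example.one", "wss://relay.example.two"];
const BLOCKED_PUBKEYS = new Set<string>([/* pubkeys flagged by a moderation service */]);

type NostrEvent = {
  id: string;
  pubkey: string;
  created_at: number;
  kind: number;
  tags: string[][];
  content: string;
  sig: string;
};

// Subscribe to every upstream relay and hand each event to `onEvent`,
// unless the multiplexer's own moderation layer drops it first.
function multiplex(filter: object, onEvent: (ev: NostrEvent) => void) {
  const seen = new Set<string>(); // dedupe events that arrive from several relays
  for (const url of UPSTREAM_RELAYS) {
    const ws = new WebSocket(url);
    ws.on("open", () => ws.send(JSON.stringify(["REQ", "mux-sub", filter])));
    ws.on("message", (data) => {
      const msg = JSON.parse(data.toString());
      if (msg[0] !== "EVENT") return;              // ignore EOSE, NOTICE, etc.
      const ev = msg[2] as NostrEvent;
      if (seen.has(ev.id)) return;                 // already delivered by another relay
      if (BLOCKED_PUBKEYS.has(ev.pubkey)) return;  // moderation layer on top of relay policy
      seen.add(ev.id);
      onEvent(ev);
    });
  }
}

multiplex({ kinds: [1], limit: 50 }, (ev) => console.log(ev.pubkey, ev.content));
```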

Discussion

The client could ask a few questions on first install to choose the right multiplexer regime.
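
A minimal sketch of what that first-run choice could look like in a client, assuming a couple of yes/no questions; the regime names and multiplexer URLs are made up for illustration.

```typescript
// Hypothetical first-run flow: onboarding answers map to a "regime",
// i.e. a multiplexer endpoint plus a default policy for reported content.

type Regime = {
  multiplexer: string;          // relay/multiplexer the client will talk to
  hideReportedContent: boolean; // hide notes flagged by kind 1984 reports by default
};

type Answers = {
  wantsFiltering: boolean;   // "Do you want reported content filtered for you?"
  minorInHousehold: boolean; // "Will minors use this device?"
};

function chooseRegime(a: Answers): Regime {
  if (a.minorInHousehold) {
    return { multiplexer: "wss://strict-mux.example", hideReportedContent: true };
  }
  if (a.wantsFiltering) {
    return { multiplexer: "wss://moderated-mux.example", hideReportedContent: true };
  }
  return { multiplexer: "wss://open-mux.example", hideReportedContent: false };
}

console.log(chooseRegime({ wantsFiltering: true, minorInHousehold: false }));
```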

I’d venture to say they would have had a different experience if their first nostr experience was nostr:npub1pu3vqm4vzqpxsnhuc684dp2qaq6z69sf65yte4p39spcucv5lzmqswtfch. 😁

I'm on Amethyst, and from what I heard it's more vanilla than Damus. Depending on the relays a client uses and their moderation policies, app stores could warn about inappropriate content. So relay choice will matter.

I might add "safeguarding practices/approaches" to nostr.com, and try building a resource list for client developers

What use case did this foundation have in mind?

It’s an oxymoron to me that human rights activists would want some 3rd-party authority to censor them.

Maybe I’ve misunderstood?

You have. On safeguarding for young people, for example, it's important that they can use nostr clients without being exposed to inappropriate content. We in fact have the tools and the permissionless development environment to make safer spaces than centralised services.

This is super interesting to me.

One version that I can envision is a build or app that is organized around a particular audience or use case with situationally appropriate relays and algorithm options.

Netiquette was a good start, but this will require a deft balance of intent and engineering.

Of course there would always be the option of a fully open, unadulterated installation if people chose that.

I don't know the specifics there, but I doubt the issue is wanting "some 3rd party to censor them." Rather, they want to use the platform in a manner that does not result in them being harassed or spammed. Moderation is very rarely about "censorship," and more often about community building and community norms.

Yes