Two problems with what you're proposing…

1) How do the relays know what to moderate? Someone needs to report it, which (in the world of Nostr) means there's a public record of it. As soon as it exists, clients will use it to moderate (as Amethyst does now).

2) How do people who have different views talk to each other if they're on different relays? The whole point of Nostr is that you can have a lovely, weird mix of content based on who you follow and what relays you use.
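To make point 1 concrete: under NIP-56, a report is itself an ordinary Nostr event (kind 1984), so the moment someone files one it is public data that any client can query and act on. A minimal sketch (unsigned, with placeholder hex pubkeys/ids; a real event would also carry `id` and `sig`):

```python
import json
import time

def build_report(reporter_pubkey, offending_event_id, offending_pubkey,
                 report_type, reason=""):
    """Sketch of a NIP-56 report: a plain kind-1984 event.

    Because it is a regular Nostr event, any relay or client that sees it
    can read it - the report itself becomes part of the public record.
    """
    return {
        "kind": 1984,  # NIP-56 reporting kind
        "pubkey": reporter_pubkey,
        "created_at": int(time.time()),
        "tags": [
            ["e", offending_event_id, report_type],  # the reported note
            ["p", offending_pubkey],                 # its author
        ],
        "content": reason,  # free-text explanation, also public
    }

report = build_report("deadbeef" * 8, "cafebabe" * 8, "f00dface" * 8,
                      "spam", "link farm")
print(json.dumps(report, indent=2))
```

Once events like this exist, nothing stops a client from fetching all kind-1984 events that reference a note and hiding it - which is exactly the behaviour described above.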


Discussion

I should add a couple things since this thread just got pushed by nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6

For starters my vision is that…

- Anyone can be a moderator

- The user chooses what, if any filtering happens to their feed by choosing their own moderators.

It's fully described here…

https://s3x.social/nostr-content-moderation

This is done through two new NIPs…

"NIP-68" defines a generic process to put any type of label on content - at the time it's originally posted, or after it's been posted.

"NIP-69" defines how that labeling process can be used for the purpose of content moderation…

The latest versions of both NIPs were put into the PR by nostr:npub1wmr34t36fy03m8hvgl96zl3znndyzyaqhwmwdtshwmtkg03fetaqhjg240 last night. You can see them here…

https://github.com/nostr-protocol/nips/pull/457/commits/dd967e52211e6245a3c4db9998b31069cb2b628e

That said, today I added an important section about how those labels will get used, which hasn't yet been added to the PR. You can see the full NIP-69 here…

https://github.com/s3x-jay/nostr-nips/blob/master/69.md
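As a rough sketch of the idea, a third-party label could be an ordinary event that points at the content it labels. The kind number, tag names, and vocabulary below are illustrative assumptions, not the draft text - see the linked NIPs for the actual format:

```python
import time

def build_label(moderator_pubkey, target_event_id, label,
                namespace="moderation"):
    """Hypothetical shape of a moderation label, loosely in the spirit
    of the NIP-68/69 drafts. Kind 1985 and the "L"/"l" tag names are
    assumptions for illustration only.
    """
    return {
        "kind": 1985,               # assumed label kind
        "pubkey": moderator_pubkey,
        "created_at": int(time.time()),
        "tags": [
            ["e", target_event_id],    # the content being labeled
            ["L", namespace],          # label namespace
            ["l", label, namespace],   # the label itself
        ],
        "content": "",
    }

label = build_label("a1" * 32, "b2" * 32, "spam")
```

The user-chosen-moderator model then falls out naturally: a client fetches labels authored only by the pubkeys the user has picked as moderators, and filters the feed accordingly.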

For 1: you've previously said the intent is to manage illegal content - in that case, technically all content on your relay needs to be audited/moderated. Moderation doesn't mean taking action - just that a layer of decision-making sits above the content. If what you actually want is a flagging mechanism to mark content as higher priority for moderation review, that's a different problem and use case. You don't need Nostr for that - a plain HTTP request to your server will do. What you're after in that case is centralised reporting, specific to your relay. Other relays don't share your relay's local laws or concerns.

Individual relays can do whatever they want. You could have a whitelist for who can post to a channel, or even an approval queue before the relay will show an event in queries. Ideally that behaviour is described in a NIP, and the relay advertises the NIP number in its supported_nips list.

You can build a moderated, Reddit-style subreddit environment for a relay, or even a group of relays. You can shadow-ban identities and events. You can hide posts or comments. You can limit publishing to certain channels or groups only - or whitelist. You can even delete events from the relay without a delete event. Or use a delete event from a whitelisted pubkey that's allowed to delete other pubkeys' content from your relay - an event that's invalid for other relays, so they can reject it.
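The relay-side options above all reduce to a policy decision per incoming event. A minimal sketch (the function and return values are illustrative, not part of any NIP):

```python
def relay_policy(event, whitelist, shadow_banned):
    """Sketch of a centralised relay-side policy.

    Returns what the relay should do with an incoming event:
    - "reject": refuse the EVENT message outright (whitelist-only posting)
    - "shadow": store it but never return it in REQ results (shadow ban)
    - "accept": store and serve normally
    """
    if event["pubkey"] not in whitelist:
        return "reject"
    if event["pubkey"] in shadow_banned:
        return "shadow"
    return "accept"
```

For example, `relay_policy({"pubkey": "alice"}, {"alice", "bob"}, {"bob"})` returns `"accept"`, while the same event from "bob" would be shadow-banned and anyone else rejected. An approval queue is the same idea with a fourth "pending" state.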

That's all centralised moderation. By all means, build a modern Reddit or Twitter - where people become power-hungry, over-moderate, censor what they dislike, and build an in-group culture that can't think for itself unless it's the same-think.

For 2, the answer is simple: join at least one relay in common. This isn't a problem specific to your use case - it's a Nostr architectural gap. Two disconnected networks are literally silos and cannot see each other, at least not without some replication or rebroadcasting. To communicate with someone directly, you look up their kind 10002 event and connect to their write (publish) relays - either on demand, or by adding one or more of them so you share common relays. To message them, you publish to their read relays. And if you're trying to solve identity or content discovery, that's also an open development area and has nothing to do with moderation as a requirement.
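The kind-10002 lookup is straightforward to sketch. Under NIP-65, the relay-list event carries `r` tags of the form `["r", <url>]` or `["r", <url>, "read"|"write"]`, with no marker meaning both:

```python
def parse_relay_list(event):
    """Split a NIP-65 (kind 10002) relay-list event into read/write sets.

    Each "r" tag is ["r", <url>] or ["r", <url>, "read"|"write"];
    no marker means the relay is used for both.
    """
    reads, writes = set(), set()
    for tag in event.get("tags", []):
        if not tag or tag[0] != "r":
            continue
        url = tag[1]
        marker = tag[2] if len(tag) > 2 else None
        if marker in (None, "read"):
            reads.add(url)
        if marker in (None, "write"):
            writes.add(url)
    return reads, writes

event = {"kind": 10002, "tags": [
    ["r", "wss://relay.example/", "write"],
    ["r", "wss://inbox.example/", "read"],
    ["r", "wss://both.example/"],
]}
reads, writes = parse_relay_list(event)
```

To see someone's notes you fetch from their write relays; to reach them you publish to their read relays - matching the publish/read split described above.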

The best part: if I don't like their #Bitcoin posts, I add a client-side filter and hide them. Simple.
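That client-side mute is a one-liner. Hashtags on Nostr notes live in lowercase `t` tags, so a sketch of the filter (function name is illustrative):

```python
def hide_hashtag(notes, tag):
    """Client-side mute: drop notes carrying a given hashtag ('t' tag)."""
    tag = tag.lower()
    return [
        n for n in notes
        if tag not in (t[1].lower()
                       for t in n.get("tags", []) if t and t[0] == "t")
    ]

notes = [
    {"content": "gm", "tags": []},
    {"content": "#Bitcoin to the moon", "tags": [["t", "bitcoin"]]},
]
print([n["content"] for n in hide_hashtag(notes, "Bitcoin")])  # ['gm']
```

No relay cooperation, no moderator, no public report - the filtering happens entirely in the user's client.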