It would be interesting to have an explicit *type* of response that acts as a flag for NSFW (or something similar). You could see how many people marked a note that way, or there could be a threshold where relays treat a note differently once it has been tagged a certain way by a specific number of people…

In fact, I guess you could do this pretty easily with badges, could you not? 🤔 #[7] #[8]
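For illustration, here's a minimal sketch of what such a flag event might look like, assuming a dedicated report kind along the lines of NIP-56's kind 1984 report events. The "nudity" report type follows NIP-56's vocabulary; the placeholder values are purely illustrative:

```typescript
// Standard Nostr event shape; the report-specific parts are the kind and tags.
interface NostrEvent {
  kind: number;
  pubkey: string;      // reporter's public key (hex)
  created_at: number;  // unix timestamp, seconds
  tags: string[][];
  content: string;
  id?: string;         // sha256 of the serialized event, computed before signing
  sig?: string;        // schnorr signature over the id
}

// A flag/report aimed at a specific note (shape per NIP-56's kind 1984).
const report: NostrEvent = {
  kind: 1984,
  pubkey: "<reporter pubkey hex>",
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["e", "<flagged event id>", "nudity"], // the note being flagged + report type
    ["p", "<flagged note author pubkey>"], // its author
  ],
  content: "optional human-readable reason",
};
```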

Discussion

Yep, beautiful. It needs to be implemented by all the clients, I guess.

I like the idea of self-reporting.

Community reporting of others' notes could be gamed by a bunch of bots that have been set to target a certain user.

With that said, a community check isn't a bad thing for protecting users who want it.
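Combining that caution with the threshold idea from the original note: a relay or client could count distinct reporter pubkeys per flagged event and only act past a cutoff. A minimal sketch, reusing the NostrEvent shape from the earlier example; REPORT_KIND and THRESHOLD are illustrative, and note that deduplicating by pubkey only raises the cost of gaming slightly, since pubkeys are free to generate:

```typescript
// Count *distinct* reporter pubkeys per flagged event: one pubkey, one vote.
const REPORT_KIND = 1984; // assumed report kind, as in the sketch above
const THRESHOLD = 5;      // illustrative cutoff, not from any spec

function isFlagged(reports: NostrEvent[], eventId: string): boolean {
  const reporters = new Set<string>();
  for (const r of reports) {
    if (r.kind !== REPORT_KIND) continue;
    // A report points at the flagged note via an "e" tag.
    if (r.tags.some(([name, value]) => name === "e" && value === eventId)) {
      reporters.add(r.pubkey); // Set dedupes repeat reports from one key
    }
  }
  return reporters.size >= THRESHOLD;
}
```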

I seem to remember that a tweet could be obscured with a message saying it had been reported as containing sensitive content. The tweet wasn't deleted, but you had to click through.

Clients could have a toggle to turn that feature on or off.
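A sketch of that toggle as a purely client-side rendering decision (the note stays on the relay either way); the names here are hypothetical, not from any client or spec:

```typescript
// Purely local display logic: nothing is deleted, the client just decides
// whether to put a click-through warning in front of a flagged note.
interface DisplayDecision {
  obscured: boolean;
  warning?: string;
}

function renderDecision(flaggedByCommunity: boolean, filterEnabled: boolean): DisplayDecision {
  if (flaggedByCommunity && filterEnabled) {
    return { obscured: true, warning: "Reported as sensitive content. Click to view." };
  }
  return { obscured: false }; // filter off, or note not flagged: show normally
}
```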

If we think a couple of years out, Nostr could power a good bulk of social media: classrooms using it with their own relays, etc.

It'd be great to have some protection built in for the minors who will be using it.

I'm not really crazy about this, as people could dogpile on content they don't like just by reporting it into oblivion. With no limit on how many sock-puppet accounts you can spin up, this solution would be highly exploitable and the opposite of "censorship resistant".