We need client apps to support content warnings, better reporting of content, and the ability to set content warnings on content you didn't post that your client can then use. This all needs to be opt-in for relays and clients, but we need it if Nostr is to grow beyond bitcoiners. nostr:note1x3nqfuy7neezhpqpuge0duemuzrmr8vkkprjcwkphkrzm2ljk2mq4stcce
Discussion
We need client apps to support POW, better ways to add POW to content, and the ability to set POW warnings on content you didn't post that your client can then use. This all needs to be opt-in for relays and clients, but we need it if Nostr is to grow beyond bitcoiners.
Seriously guys, I'm super serious.
Disagree. Nostr isn’t going to become the next Twitter or Facebook, and attempts at making it so are futile. The social element will tend toward small communities that police content themselves; for the rest of Nostr's capabilities, the idea of content warnings is just ridiculous. It’s a neutral protocol.
Would rather see effort go towards group messaging apps around private relays and better handling of identity, haven’t once thought “oh we need better content warnings”.
I’m reminded of Tyler’s wise words when people complain about content they don’t like: 
I’d say the CIA has the handle on anything called cyber bullying, no doubt .. but there is a thing called ignoring people who are dicks..
Yes, dickheads are going to be dickheads and no amount of content moderation tooling or image tagging is going to solve that.
The internet made it 20 years without this stuff and ever since it’s been introduced online discourse has been 100X worse. Almost as if these systems get weaponised by bad actors when there is power to be had..
I think fear and hate were certainly weaponized against us indeed. I think some are still afraid of their shadows and of other people.
You can already block people as well as label notes as objectionable on most clients. Are you proposing something to prevent the content from even being seen? Doubt that will ever happen, nor should it.
You can read Rabble's & my approach to content moderation here…
https://s3x.social/nostr-content-moderation
We have a pull request proposing "NIP-69" as a first step in that process, and a more general issue open on GitHub discussing the topic. (The links for both are in the page linked above.)
The basic vision is "bottom up" content moderation, where communities can moderate what people in their community see via interconnecting webs of trust. The end user picks the people, organizations, or bots they trust to have a voice in moderating their feed, and only reports by those parties are factored in when a client app filters the user's feed.
So for example, I could mark "exosome" as someone to block and he will then go on the block list of everyone who has designated me as trusted moderator of their feed. And he could go on the block list for relays that trust me as a moderator.
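To make the flow concrete, here is a minimal sketch of that "bottom up" filter: a client hides notes whose authors were reported by any moderator the user has personally designated as trusted. The dict shapes and field names (`pubkey`, `moderator_pubkey`, `reported_pubkey`) are hypothetical illustrations, not a real NIP schema.

```python
# Sketch of web-of-trust feed filtering (hypothetical data shapes).

def filter_feed(notes, reports, trusted_moderators):
    """Return only notes whose authors no trusted moderator has reported."""
    blocked = {
        r["reported_pubkey"]
        for r in reports
        if r["moderator_pubkey"] in trusted_moderators
    }
    return [n for n in notes if n["pubkey"] not in blocked]

# Example: a user trusts "me" as a moderator, so my report of "exosome"
# removes his notes from that user's feed -- and only that user's.
notes = [{"pubkey": "exosome", "content": "spam"},
         {"pubkey": "bob", "content": "gm"}]
reports = [{"moderator_pubkey": "me", "reported_pubkey": "exosome"}]
filtered = filter_feed(notes, reports, trusted_moderators={"me"})
```

The key design point is that the report events are global and signed, but their *effect* is local: each user (or relay) decides whose reports count.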
These discussions are good, I just thought of how the whole model can be extended…
Everyone is talking about how client apps will eventually support algorithms that filter users' feeds, pushing some content up and other content down. This model can factor into that: the same events used to block content could also promote it, if they could carry endorsements as well as blocks and warnings.
For example, if a fellow Nazi really liked "exosome" he could endorse him, and people who trust him would see more of exosome's posts. And to extend things further, a user could have reverse trust lists that basically say "do the opposite of whatever this person says to do". The possibilities are actually rather interesting.
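The extension above could be sketched as a simple scoring pass: the same signed events carry either an endorsement or a block, and a reverse-trust list inverts a signer's signals. All field names and the `action` values are illustrative assumptions, not a proposed spec.

```python
# Hypothetical extension: endorsements, blocks, and reverse trust
# combined into a single per-note score used for feed ranking.

def score_note(note, signals, trusted, reverse_trusted):
    """Sum moderation signals for a note's author: +1 per endorsement,
    -1 per block, with the sign flipped for reverse-trusted signers."""
    score = 0
    for s in signals:
        if s["target_pubkey"] != note["pubkey"]:
            continue
        weight = 1 if s["action"] == "endorse" else -1
        if s["signer"] in trusted:
            score += weight
        elif s["signer"] in reverse_trusted:
            score -= weight  # "do the opposite of whatever this person says"
    return score

def rank_feed(notes, signals, trusted, reverse_trusted):
    """Order the feed so endorsed authors float up and blocked ones sink."""
    return sorted(
        notes,
        key=lambda n: score_note(n, signals, trusted, reverse_trusted),
        reverse=True,
    )
```

Ranking rather than outright hiding means a block from one trusted moderator demotes content without censoring it for anyone else.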
Never forget that those prone to subversion will dominate by sucking up to get moderator status for their cause, their bots, and their paid actors.
This is a difficult topic that should be debated first on the grounds of free and equal access, then on what grows and keeps the baby safe.
What I’d like to see is everyone _can_ have moderator status. You just have to get people to designate you as one of their moderators.
That’s equitable, no?
Maybe the paid relay is the gated community you want to live in?
The last truly rotten content I was seeing was via fediverse NoAgendaSocial and I just closed my account and moved to NOSTR. I'm not paying for any relays but I'm also not investing too much time here as I notice my posts don't get very far.
Some of you have deeper relay coverage, so I think you see more of both the good and the bad. Maybe a baseball-style rating too.
In the end I usually realize I'm getting oversensitive and I need to go read a book, go for a walk, or build something and be around live humans. Fortunately my $Fiat job is building piers and boat lifts, driving piles, and marine construction, so I play on the rivers and ocean.
I think management of content is so granular that it's a science where AI is full of promise but short on profit. As the yoga peeps say, we have 9 gates into the body and each gate is meant to be guarded. I don't even watch modern "entertainment" anymore, for the very reason that it's just evil.
Happiness is a warm NOSTR, relay - relay, woot woot!