Are relays already using AI to reject grave content like cp, threats of violence, etc?

Would be nice to see an official badge/certification for relays that do, so users like me can steer clear of dark content. That seems important for #Nostr to go mainstream.

Furthermore, I would happily use an AI filter on the client side to hide hyper-sexual content, and even to auto-ignore users who match certain criteria.
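A minimal sketch of what that client-side filter could look like, assuming a hypothetical local classifier `score_content()` that returns a score in [0, 1] (higher = more likely unwanted); the threshold, hit count, and flagged-term placeholder are illustrative only, not any existing client's API:

```python
# Hypothetical client-side filter: hide individual notes above a score
# threshold, and auto-ignore authors who repeatedly trip the filter.
from collections import defaultdict

HIDE_THRESHOLD = 0.8   # hide a single note scoring above this
AUTO_IGNORE_HITS = 3   # mute an author after this many hidden notes

hidden_counts = defaultdict(int)
muted_authors = set()

def score_content(text: str) -> float:
    """Placeholder for a real on-device model; here just a term match."""
    flagged_terms = {"example-flagged-term"}
    return 1.0 if set(text.lower().split()) & flagged_terms else 0.0

def should_display(author: str, text: str) -> bool:
    """Return False if the note should be hidden from the timeline."""
    if author in muted_authors:
        return False
    if score_content(text) >= HIDE_THRESHOLD:
        hidden_counts[author] += 1
        if hidden_counts[author] >= AUTO_IGNORE_HITS:
            muted_authors.add(author)
        return False
    return True
```

The point of keeping this on the client is exactly the sovereignty argument above: the user picks the model, the threshold, and the mute rules, rather than a relay operator picking them for everyone.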

AI should in theory give us the tools to remove/reject/hide content without reducing user sovereignty or creating a class of moderator gods.


Discussion

This is totally crucial

While I don't feel relays should be required to censor or be responsible for the content of notes that clients publish, I do think there are users who will want relays that provide content moderation.

I don't follow relays closely but I don't think there is anything like that right now.

Rejecting child porn is a feature that users will self-select for in a free market of relays.

I think this is correct.

Clients such as Amethyst will block images by default, but I think that may only happen if the publisher tags the content as potentially offensive. It would be better if users could select relays that attempt to block illegal content, and better still if clients had some intelligence to provide another layer of protection.
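The publisher-side tagging mentioned here corresponds to Nostr's NIP-36 `content-warning` tag on events. A small sketch of how a client could check for it, assuming the standard event JSON shape where `tags` is a list of lists:

```python
import json

def has_content_warning(event_json: str) -> bool:
    """True if a Nostr event carries a NIP-36 content-warning tag."""
    event = json.loads(event_json)
    return any(tag and tag[0] == "content-warning"
               for tag in event.get("tags", []))

# Example event with a content-warning tag and an optional reason.
note = '{"kind": 1, "content": "...", "tags": [["content-warning", "nsfw"]]}'
```

The weakness the comment points out is visible here: the check only works if the publisher cooperates and adds the tag, which is why a second, client-side layer of intelligence would help.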

The best part of this post is that “moderator gods” is uncapitalized. You are the real deal, man. Respect.

Thought someone might.

That plus WOT (web of trust) looks like a decent combo.