You can pay to subscribe to indexers that proxy all relays and run custom algos for you on your feed. Relays should not censor.
Discussion
Disagree. I have my own relay and I can do anything I want with it. Nobody has to use my relay if they don't like it, and there are thousands to choose from.
Seems more like a need for relays to be transparent about what they allow and what they remove or block. Most relays are not even configured with a corresponding website, so NIP-11 is often the only way to learn anything about a relay.
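For reference, NIP-11 works by serving a JSON "relay information document" over plain HTTP at the relay's own URL when the client sends an `Accept: application/nostr+json` header. A minimal sketch of fetching it (the example relay URL is hypothetical):

```python
import json
import urllib.request

def nip11_http_url(relay_url: str) -> str:
    """Map a relay websocket URL to the HTTP URL used for NIP-11."""
    if relay_url.startswith("wss://"):
        return "https://" + relay_url[len("wss://"):]
    if relay_url.startswith("ws://"):
        return "http://" + relay_url[len("ws://"):]
    return relay_url

def fetch_relay_info(relay_url: str) -> dict:
    """Fetch and parse the relay's NIP-11 information document
    (name, description, supported NIPs, limitations, etc.)."""
    req = urllib.request.Request(
        nip11_http_url(relay_url),
        headers={"Accept": "application/nostr+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

So even without a website, a relay's stated policies can be surfaced by any client that bothers to fetch this document.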
I like the idea of indexer options, but you also can't really force anyone not to moderate their own relay, because certain types of content can put you on the wrong side of the law.
If anything, if I ran a relay, I would probably want to subscribe to some moderation algo that guarantees censoring certain illegal content, so I'm not thrown in jail for being willfully ignorant.
Yes, I think this issue is being overlooked by a lot of people.
I think we need to add things to the protocol that make it easier for users and relay operators to choose what content they want to host, promote, and see. This isn't about censoring as in taking away people's freedom to publish, rather providing people the freedom from having to see or host content they don't want to.
That's why I helped put together the proposed NIP-69, to give us a set of tags so we can easily tag, categorize, and filter content. It applies both to reports and to self-tagging for content warnings. It doesn't say what people should do with the content, it just makes it easier to make choices and take action.
Running a church relay? Don't host any content that is in the Porn category.
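As an illustration of that church-relay policy: a relay could check self-applied labels before accepting an event. This is a hypothetical sketch, the tag shape here follows the existing `content-warning` tag convention with a category as its value, and the "porn" label and the `relay_accepts` hook are assumptions, not part of any finalized NIP:

```python
# Hypothetical relay policy: reject events self-labeled with blocked categories.
BLOCKED_CATEGORIES = {"porn"}  # e.g. a church relay's choice

def event_categories(event: dict) -> set:
    """Collect category labels from an event's content-warning tags.

    Assumes self-applied labels appear as ["content-warning", <category>]
    tags, one shape such tagging could take.
    """
    return {
        tag[1]
        for tag in event.get("tags", [])
        if len(tag) >= 2 and tag[0] == "content-warning"
    }

def relay_accepts(event: dict) -> bool:
    """Policy hook a relay could run on incoming events."""
    return not (event_categories(event) & BLOCKED_CATEGORIES)

note = {"kind": 1, "content": "...", "tags": [["content-warning", "porn"]]}
print(relay_accepts(note))  # False under this policy
```

The point is exactly what the post says: the tags don't dictate a policy, they just make a policy like this trivial to implement.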
I see thanks for sharing. Good discussion there.
Now, does this NIP handle abuse of labeling (e.g. using your own puritan view to label something as bad when it's really not that terrible)?
People may end up reporting anything they disagree with or don't like, even if it's legal and fine by others.
So it doesn't say what should be DONE about the labeling, whether a report or a content warning. The relays and users, through the client apps, will still need to decide what to do about it.
For example, in Nos, we're thinking of putting a tap-to-reveal over anything that has a content warning added either by the original publisher or by somebody you follow or who they follow, the same way we scope our "discover" tab. Eventually users could choose automatic settings, say: always hide anything reported as spam or phishing, keep porn as something you have to tap to reveal, and show everything else.
We'll have some things which are constrained by the app stores; for example, if the user is a minor, we don't show sexual content and don't let the user choose. And we'll keep a copy of the Child Sexual Abuse reports on some relay we run and need to do some checking on that, and block it for everybody. We don't want to expose our users to legal jeopardy for having pedophilia on their devices.
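The decision logic described above can be sketched roughly as follows. This is a hypothetical model of it, not Nos code; the report-type names and the trust predicate (publisher, follows, follows-of-follows) are assumptions drawn from the description:

```python
# Assumed severity tiers, modeled on the settings described above.
ALWAYS_HIDE = {"spam", "phishing", "csam"}
TAP_TO_REVEAL = {"porn", "nudity"}

def display_action(reports, reporter_trusted) -> str:
    """Decide how to render an event given reports against it.

    reports: iterable of (report_type, reporter_pubkey) pairs
    reporter_trusted: predicate saying whether we trust a reporter
      (e.g. the publisher themselves, someone we follow, or a
      follow-of-a-follow).
    Returns "hide", "blur" (tap to reveal), or "show".
    """
    trusted_types = {rtype for rtype, who in reports if reporter_trusted(who)}
    if trusted_types & ALWAYS_HIDE:
        return "hide"
    if trusted_types & TAP_TO_REVEAL:
        return "blur"
    return "show"
```

Note that reports from untrusted reporters are simply ignored here, which also addresses the report-spam concern raised later in the thread.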
Again, other relay operators and clients and users are free to make different choices. Host different kinds of content, see different kinds of content.
Makes sense.
I think blurring or collapsing reported content, and showing the report reason and text, is a very sensible approach. Then let the user decide whether they want to see it or not.
And something that occurred to me just now: make sure it's possible to block reporters as well, to prevent report-spam.
Sounds like the right approach to me.
We ( #[7]) are working on having a means of at least staying legal where we have geographic nodes. We're in the process of determining our ToS and need to find a happy medium between staying legal and preserving free speech. Right now we depend on relay users reporting to us, which is not an effective means.