That's a good reason, but if we haven't been able to eliminate this type of content on centralized platforms to date, I don't believe we'll be able to on a decentralized protocol either. Still, I wish my relays didn't feed me that content. But I want transparency: I want to know what content is censored, and a way to check.
Discussion
here’s the thing…
even if child sex trafficking weren’t a multi-billion-dollar industry (which makes me want to go on a Rambo outing), the FBI would plant child porn on the network at some point. Or “terrorist communications” to justify whatever they wanted to do.
Nostr is a sitting duck until it solves for these things. Unfortunately, short-sighted soy devs who also want to censor the world for their feefees make up the majority of the “moderation” crowd. So thinking through how to deal with this in a new way has to happen.
Either easy way of dealing with this, censorship or removal of anonymity, is exactly what the corrupt State wants. And as soon as that door is open, the Feds will come in.
The problem comes from dealing with it using centralized power. So the question becomes: how can it be dealt with at the individual level without changing relays’ simplicity, and without making all clients censorship tools of a corrupt state and groups of crying soy devs?
How does one censor content without censoring content?
It’s a real question…
I think we see the problem the same way. I’m open to more ideas!
we are all looking for ideas.
it concerns me that most solutions aren’t simple. and the temptation toward hero-position-seeking behavior around this is strong.
I’ve always been partial to Slashdot’s mod point system, where you can surf at whatever moderation level you want.
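For anyone unfamiliar with the Slashdot approach, here’s a minimal sketch of the idea in Python. The post data, score values, and `surf` function are all made up for illustration; the point is just that every post carries an aggregate moderation score, and each reader picks their own threshold rather than anyone deleting content globally.

```python
# Hypothetical Slashdot-style threshold filtering: moderation assigns
# scores, readers choose what score floor they browse at. Nothing is
# removed from the system itself.

posts = [
    {"id": "a", "text": "insightful comment", "score": 5},
    {"id": "b", "text": "spam", "score": -1},
    {"id": "c", "text": "ordinary reply", "score": 1},
]

def surf(posts, threshold):
    """Return only posts at or above the reader's chosen score threshold."""
    return [p for p in posts if p["score"] >= threshold]

# Browsing at +1 hides the spam; browsing at -1 shows everything.
print([p["id"] for p in surf(posts, 1)])   # → ['a', 'c']
print([p["id"] for p in surf(posts, -1)])  # → ['a', 'b', 'c']
```

The key property is that the filter lives entirely on the reader’s side, so no central party decides what anyone else sees.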
I’m not talking about optional levels of engagement. If that were the discussion, I wouldn’t be engaging in this conversation.
My concern is purely about illegal content, and how that could be used by the censorship industrial complex to trojan horse nostr.
The best way to mitigate that is to come up with a system that handles that problem without offering the Feds an attack surface they can use in other ways. Which is no small feat.
It just dawned on me that keeping media delivery separate is the key.
clients could incorporate external media delivery services like nostr.build, and those services would bear the brunt of dealing with illegal content. If people then use external links to bypass media delivery services, that is outside the control of nostr and outside client developers’ responsibility. That becomes the purview of the Feds.
If a media delivery service then got weird and started censoring for reasons beyond illegality, it could be replaced, or bypassed by posting external links, preventing fully centralized censorship.
If a client starts moving past this and centralizes censorship for arbitrary reasons, it can be replaced too.
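The separation described above could look something like this on the client side. This is only a sketch of one possible policy, not anything any client actually implements: nostr.build is the service named above, the rest of the trusted set and the function name are invented. Media is only fetched inline when it comes from a delivery service the user has opted into; everything else stays a bare link the client never loads on its own.

```python
from urllib.parse import urlparse

# Hypothetical client-side media policy. The user (not the protocol)
# chooses which media delivery services to trust; those services carry
# the burden of handling illegal content. Links to anything else are
# displayed as plain text and never auto-fetched by the client.
TRUSTED_MEDIA_HOSTS = {"nostr.build"}  # user-configurable, illustrative

def render_decision(url: str) -> str:
    """Decide whether a URL in a note is rendered inline or left as a link."""
    host = urlparse(url).hostname or ""
    if host in TRUSTED_MEDIA_HOSTS:
        return "render"      # inline image/video via the trusted media service
    return "plain-link"      # shown as text; fetching it is the user's choice

print(render_decision("https://nostr.build/i/abc.png"))  # → render
print(render_decision("https://example.com/pic.png"))    # → plain-link
```

Because the trusted set is per-user configuration, a service that starts censoring arbitrarily can be dropped and swapped out without touching relays or the protocol itself.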