I think we need to take a step back and reiterate that people won't do what you want them to do. So the question is: which system is more adaptable?

If you tell people to filter which relays they send to - they won't. Then the relays are left having to filter the content to conform to their relay ethos (for lack of a better word). But the only input in that approach is at the time of posting. I'm not seeing how your approach recovers from the original poster doing things "wrong" - which they will do countless times per day.

If you have a content tagging/classification system, classification can happen at many points in the process. The original poster can classify a note. People who see it can classify it. Bots that watch the streams can classify it. Yes, the relay has to do some work, but it had to do that same work anyway for all the people who didn't post "correctly" - and in what you're proposing it has to figure that out on its own with no meaningful input. With a content classification system there are many data points and a lot more information to help the relay make the decision that's best for it.
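To make that concrete, here's a minimal sketch of what a third-party classification could look like as a Nostr event, roughly in the style of NIP-32 label events (kind 1985). The "content-rating" namespace, the label values, and the pubkey/id placeholders are illustrative assumptions, not an agreed-upon standard, and the id/signature fields a real event needs are omitted.

```typescript
// Sketch only: a classification ("label") that any observer - the author,
// a reader, or a bot watching the stream - can publish about an existing note.
// Follows the general shape of NIP-32 label events; the namespace and label
// values below are made up for illustration.

type Tag = string[];

interface LabelEvent {
  kind: number;       // 1985 is the labeling kind used by NIP-32
  pubkey: string;     // whoever issued the classification: author, reader, or bot
  created_at: number; // unix timestamp in seconds
  tags: Tag[];
  content: string;    // optional free-text justification
}

// Hypothetical example: a bot classifies someone else's note as adult content.
const label: LabelEvent = {
  kind: 1985,
  pubkey: "<classifier-pubkey>",
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["L", "content-rating"],            // label namespace (assumed name)
    ["l", "adult", "content-rating"],   // label value within that namespace (assumed)
    ["e", "<id-of-the-labeled-note>"],  // the note being classified
  ],
  content: "auto-classified as adult content",
};
```

The point of the shape is that a relay (or a client) can collect many of these from different pubkeys about the same note and weigh them however it likes, instead of relying solely on the original poster getting it right.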

As far as Apple and Google app store standards go - I can pull up A LOT of porn on my Twitter and Telegram clients. So those standards may be more flexible than you think. It's kinda for the app developers to thread that needle given the precedents that already exist. I'm guessing what's important is that there be some type of filtering in place and that it not be total shit. With both Twitter and Telegram, Apple's fine with "the user turned off filtering of sensitive content". I'm literally proposing the same thing - only the user would get to define more precisely what they want blocked, rather than just living with some corporate standard they may not agree with.
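And to be clear about what "the user would get to define what they want blocked" means mechanically, here's a rough client-side sketch. It assumes a note can carry a self-applied "content-warning" style tag (as in NIP-36) and that the client has also collected third-party labels like the ones sketched above; every function and field name here is illustrative, not an existing API.

```typescript
// Sketch only: hide a note when any label attached to it (self-applied or
// from labelers the user trusts) is on the user's personal block list.

interface Note {
  id: string;
  tags: string[][]; // e.g. [["content-warning", "nudity"], ...]
}

// Labels attached to a note id by third parties (e.g. gathered from label events).
type LabelIndex = Map<string, Set<string>>;

function isHidden(note: Note, externalLabels: LabelIndex, blocked: Set<string>): boolean {
  // Self-classification on the note itself, e.g. ["content-warning", "nudity"].
  const selfLabels = note.tags
    .filter(t => t[0] === "content-warning" && t.length > 1)
    .map(t => t[1]);

  const applied = new Set([...selfLabels, ...(externalLabels.get(note.id) ?? [])]);

  for (const label of blocked) {
    if (applied.has(label)) return true;
  }
  return false;
}

// Example: this user only wants "gore" hidden, so a note labeled "nudity" still shows.
const blockedByUser = new Set(["gore"]);
const thirdPartyLabels: LabelIndex = new Map([["note123", new Set(["nudity"])]]);
const note: Note = { id: "note123", tags: [] };

console.log(isHidden(note, thirdPartyLabels, blockedByUser)); // false
```

The design choice being argued for is simply that the block list comes from the user rather than from a single corporate standard.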

With Louisiana passing age verification for adult content, and Utah and Arkansas passing age verification and parental consent for ALL social media - this will be coming to a head sooner rather than later. I spoke with one of the top free speech lawyers in the nation a couple weeks ago about something that had a lot of the same elements. I wouldn't say that lawyer and I are completely on the same page - it's a complicated, nuanced topic and it will take time to fully understand each other's perspective. (I'm trying to understand his legal perspective and apply it. He's trying to understand some of the technical ideas I'm proposing.) But some of the other things that have been proposed here are simply non-starters - they're discriminatory and are based on hegemonic norms that are anything but culturally neutral.

Discussion

The difference between what you're proposing and Telegram and Twitter is that neither one has a "show me NSFW" button, which is effectively what you're asking for. Any app that explicitly has a setting that says "show me naughty stuff" (as defined by the notorious prudes at Apple) will be booted from the App Store.

With Telegram, I know for a fact the iOS app blocks channels for sharing "adult content", and there's no option in the iOS app to turn that filter off. If you have the Telegram app on your computer (not the Mac App Store version - that's gimped too), you can disable the filter there, and it applies to your whole account on all devices, including iOS. So they've sort of snuck around it, but notice that there's no option to enable NSFW content in any Telegram version available from the App Store.

In fact, I just checked and the same is now true for Telegram on Android too. On the desktop app the setting is under Privacy and Security > Disable filtering. This setting doesn't exist in the mobile apps.

So you see my point. It's not that I have any sort of moral objection - as I just alluded to, I use FetLife and I'm in the BDSM scene, so porn doesn't exactly bother me - but from a practical standpoint you'll find that any setting explicitly designed to allow NSFW content will be forcibly removed from the mobile apps by Apple and Google.

While there are likely to be workarounds like there are for Telegram, it's not an ideal UX. The best option is a feature that appears "innocent", because then the gatekeepers won't have much of a leg to stand on when they demand the removal of a feature that merely filters which relays you send to.

As for those laws, they will be unenforceable on Nostr. Simple as that. Since this is the US we're talking about, I wonder if they'll end up getting thrown out on First Amendment grounds as well. I'm pretty sure a strong case can be made.

Unfortunately censorship and monitoring under the guise of "protecting the children" won't go away, so technology has to evolve to make such Orwellian bullshit unenforceable.

Luckily it already has!

"We can code faster than you can regulate."