Replying to s3x_jay

I think this is one thing where the relays need to do the filtering and just assume they're gonna get sent absolutely everything.

To take an example - successful OnlyFans models are good at marketing. When they hear about "blaster relays" that rebroadcast to the top 100 relays, they're gonna be all over that. Being professionals, you might get them to hit a classification button (G, PG, R, X type of thing) before posting. That could warn the relay about what's in the content, but they absolutely will try to get as much reach on every post as they can. OnlyFans might be where they make the sale, but it's not where people hear about them. Getting as much reach as possible is literally dollars in their pocket.
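
For what it's worth, the wire format for that classification button could be tiny. A rough sketch: the "content-warning" tag is a real convention (NIP-36 defines it for sensitive content), but the "rating" tag and its G/PG/R/X values are invented here for illustration.

```typescript
// Hypothetical self-classified kind-1 note. Only the "content-warning"
// tag is a real convention (NIP-36); "rating" and its G/PG/R/X values
// are made up for this sketch.
const note = {
  kind: 1,
  created_at: Math.floor(Date.now() / 1000),
  content: "new set just dropped, link in profile",
  tags: [
    ["content-warning", "nudity"], // NIP-36: clients hide this behind a click-through
    ["rating", "X"],               // hypothetical G/PG/R/X self-classification
  ],
  // pubkey, id, and sig get filled in when the event is signed
};
```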

Then there's Joe Schmoe who just really likes sharing explicit content for whatever (non-commercial) reason. I don't think you'll ever get guys like him to label their content - he's not a professional - not even a pretend professional.

Bottom line - if you're expecting people to send different types of content via different relays, that effort would be better put toward having them classify the content before sending, since they're already classifying it (at least mentally) to figure out which relays to send to.

So far the only Nostr client I've seen that does a decent job of letting you switch between accounts is Hamstr.to - and I'm not sure of its status - if I'm looking at the correct GitHub page, it hasn't had any activity in 2+ months. So you can't really tell people to have multiple accounts if the clients can't handle it. Plus - some people just won't have multiple accounts. I'm sometimes shocked at what my younger friends mix on a single social media account.

Either implementation would require changes in how the client sends out notes; the amount of effort (programming-wise) is the same.

The primary benefit of the relay method, however, is that it lets clients comply without falling afoul of Apple and Google rules. Most people use this on mobile, remember.

If all relays can provide nsfw content, devs will be forced to block the nsfw event type completely.

If the feature is instead a neutral relay selector, well that's perfectly innocent...

You make a strong case wrt OF models. But then I'd also argue that if you encourage pros only, you narrow your audience significantly, because your relay or classifier or whatever just becomes an ad channel.

A lot of people are sick of FetLife now for exactly this reason. There's not much community element anymore. Most of the active women on there are just shilling their OF pages.

I assume it works if they keep doing it, but with Nostr you'd have to opt in to nsfw content in some shape or form. If, after doing so, your feed is just flooded with OF ads, I'd put it to you that most people will just turn it off again.

After all Pornhub is free, Literotica is free, LLMs trained to act like kinky AI gf's are free...

Discussion

I think we need to take a step back and reiterate that people won't do what you want them to do. So the question is: which system is more adaptable?

If you tell people to filter which relays they send to - they won't. Then the relays are left having to filter the content to conform to their relay ethos (for lack of a better word). But the only input in that approach comes at the time of posting. I'm not seeing how your approach recovers from the original poster doing things "wrong" - which they will do countless times per day.
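
To make that concrete, here's a minimal sketch (everything here is invented for illustration, not a real relay API) of what the relay is reduced to under that model: one accept-or-reject guess at post time, with nothing to go on but the event itself.

```typescript
// Hypothetical relay write-policy under the relay-selection model:
// the event arrives once, at post time, and this one guess is the only
// input the relay's ethos ever gets. The keyword check is a deliberately
// crude stand-in for "figure it out on their own with no meaningful input".
function acceptEvent(event: { content: string }): boolean {
  const looksExplicit = /\b(nsfw|xxx|onlyfans)\b/i.test(event.content);
  return !looksExplicit; // wrong guesses can never be corrected later
}
```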

If you have a content tagging/classification system it can happen at many points in the process. The original poster can classify it. People who see it can classify it. Bots who watch the streams can classify it. Yes, the relay has to do some work, but they had to do that same work anyway for all the people who didn't do their post "correctly" - and in what you're proposing they have to figure it out on their own with no meaningful input. With a content classification system there are many data points and a lot more information to help the relay make the decision that's best for them.
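
Nostr already has a primitive that fits the "many classification points" idea: NIP-32 label events (kind 1985), which let anyone - the author, a viewer, or a moderation bot - attach a label to someone else's note after the fact. A rough sketch; the "content-rating" namespace and the "R" value are made up, only the L/l tag mechanics come from NIP-32.

```typescript
// A NIP-32 label event (kind 1985): a third party classifying someone
// else's note after it was posted. The "content-rating" namespace and
// the "R" value are hypothetical; NIP-32 only specifies the tag shape.
const labelEvent = {
  kind: 1985,
  created_at: Math.floor(Date.now() / 1000),
  content: "",                             // optional human-readable reason
  tags: [
    ["L", "content-rating"],               // label namespace
    ["l", "R", "content-rating"],          // the label itself
    ["e", "<event id of the labeled note>"],
    ["p", "<pubkey of the note's author>"],
  ],
};
```

A relay or client can then tally labels from classifiers it trusts and act per its own policy - that's the "many data points" advantage in practice.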

As far as Apple and Google app store standards - I can pull up A LOT of porn on my Twitter and Telegram clients. So those standards may be more flexible than you think. It's kinda for the app developers to thread that needle given the precedents that already exist. I'm guessing what's important is that there be some type of filtering used and that it not be total shit. With both Twitter and Telegram Apple's fine with "the user turned off filtering sensitive content". I'm literally proposing the same thing - only the user would get to define more precisely what they want blocked - rather than just living with some corporate standard they may not agree with.
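
To be concrete about it, "the user defines what they want blocked" could be as small as this - a sketch that assumes notes carry the hypothetical rating tag from earlier:

```typescript
// Hypothetical per-user filter: the user picks which ratings to hide
// instead of inheriting one corporate standard for everyone.
type Rating = "G" | "PG" | "R" | "X";

function shouldShow(
  event: { tags: string[][] },
  hidden: Rating[],            // e.g. a fresh install might default to ["R", "X"]
): boolean {
  const rating = event.tags.find(t => t[0] === "rating")?.[1] as Rating | undefined;
  if (!rating) return true;    // unlabeled: fall back to other signals (labels, bots)
  return !hidden.includes(rating);
}
```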

With Louisiana passing age verification for adult content, and Utah and Arkansas passing age verification and parental consent for ALL social media - this will be coming to a head sooner rather than later. I spoke with one of the top free speech lawyers in the nation a couple weeks ago about something that had a lot of the same elements. I wouldn't say that lawyer and I are completely on the same page - it's a complicated, nuanced topic and it will take time to fully understand each other's perspective. (I'm trying to understand his legal perspective and apply it. He's trying to understand some of the technical ideas I'm proposing.) But some of the other things that have been proposed here are simply non-starters - they're discriminatory and are based on hegemonic norms that are anything but culturally neutral.

The difference between what you're proposing and Telegram and Twitter is that neither one has a "show me nsfw" button, which is effectively what you're asking for. Any app that explicitly has a setting that says "show me naughty stuff" (as defined by the notorious prudes at Apple) will be booted from the App Store.

With Telegram, I know for a fact the iOS app blocks channels for sharing "adult content", and there's no option in the iOS app to turn that filter off. If you have the Telegram app on your computer (not the Mac App Store version - that's gimped too) you can disable it from there, and the change applies to your whole account on all devices, including iOS. So they've sort of snuck around it - but notice that there's no option to enable nsfw content in any Telegram version available from the App Store.

In fact, I just checked, and the same is now true for Telegram on Android too. On the desktop app the setting is under Privacy and Security > Disable filtering. This setting doesn't exist in the mobile apps.

So you see my point. It's not that I have any sort of moral objection - as I just alluded to, I use FetLife, I'm in the BDSM scene, so porn doesn't exactly bother me - but from a practical standpoint you'll find that any setting explicitly designed to allow nsfw content will be forcibly removed from the mobile apps by Apple and Google.

While there are likely to be workarounds, like there are for Telegram, it's not an ideal UX. The best option is a feature that appears "innocent", because then the gatekeepers won't have much of a leg to stand on if they demand the removal of a feature that merely lets you choose which relays to send to.

As for those laws, they will be unenforceable on Nostr. Simple as that. Since this is the US we're talking about, I wonder if they'll end up getting thrown out on 1A grounds as well. I'm pretty sure a strong case can be made.

Unfortunately censorship and monitoring under the guise of "protecting the children" won't go away, so technology has to evolve to make such Orwellian bullshit unenforceable.

Luckily it already has!

"We can code faster than you can regulate."