Replying to Avatar rabble

What’s acceptable is subjective, but generally not entirely individual. We tend to listen to our peers a lot to learn how we feel about things.

I was talking to a friend who at the time was a member of parliament for the Pirate Party in Iceland, and he said, jokingly, that he wanted to pass a law forcing Facebook to allow full nudity for users in Iceland, and to ban anything to do with guns or violence, including Hollywood content that wasn’t real.

This wasn’t serious. It was a thought experiment. Why did Icelandic users, in Iceland, talking to other people in Iceland, have to conform to the content standards of a California corporation?

If Germany can pass a law banning Nazi symbols, and companies all over the world enforce it upon users in Germany, why shouldn’t other countries do something similar? What about forcing companies to allow content they’d otherwise block?

Anyway, on a more practical level, for nostr, I think users should be able to self-declare content warnings for individual posts or their entire feed, and it should be done with a tagging system (I believe Tumblr does this). So I decide I don’t agree with Icelandic social norms and don’t want to see nudity or sexual content. The Icelandic nostriches (is that what we’re called?) might not self-declare warnings, so we clearly need to be able to label other people’s posts or feeds too.
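Nostr already has a convention in this direction: NIP-36 defines a `content-warning` tag that clients can use to hide a post behind a warning. A minimal sketch of attaching one to an event (signing and the remaining event fields are omitted for brevity):

```python
# Sketch, not a full implementation: attach a self-declared content warning
# to a Nostr event dict, following the NIP-36 "content-warning" tag convention.
def with_content_warning(event: dict, reason: str) -> dict:
    tagged = dict(event)
    tagged["tags"] = event.get("tags", []) + [["content-warning", reason]]
    return tagged

note = {"kind": 1, "content": "beach photos from Reykjavik", "tags": []}
warned = with_content_warning(note, "nudity")
print(warned["tags"])  # [['content-warning', 'nudity']]
```

Per-feed (rather than per-post) declarations would need a separate convention, e.g. a tag on the profile metadata event; that part is not standardized.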

I think apps should then only trust those labels based on the social graph. They shouldn’t display a random person’s content warning, but if it’s somebody you follow, then you probably trust them enough to use their warnings.
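The social-graph trust rule could be as simple as this sketch, where third-party warning labels are only honored when the labeler is in the viewer's follow list (pubkeys and the label structure here are simplified placeholders, not a real NIP):

```python
# Sketch: honor content-warning labels only from accounts the viewer follows.
# labels: list of (labeler_pubkey, target_event_id, warning) tuples.
def trusted_warnings(labels, follows):
    return [label for label in labels if label[0] in follows]

follows = {"alice", "bob"}
labels = [
    ("alice", "event1", "nudity"),   # followed: label is trusted
    ("mallory", "event2", "spam"),   # stranger: label is ignored
]
print(trusted_warnings(labels, follows))  # only alice's label survives
```

A real client would likely extend this to friends-of-friends or weight labelers by distance in the graph, but presence in the follow list is the base case described above.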

I agree. Going back over a decade there have been multiple attempts to classify various types of content certain people didn't like. The problem with most of them was they were based on specific cultural norms. "Inappropriate for someone under 18 years of age" means very different things in Finland and Saudi Arabia. The classification systems that were possibly capable of being culturally neutral were too complicated to implement.

This isn't a new problem, and what history has taught us is that there's no simple solution. Germany requires age verification for all adult content (but only for .de domains). The UK tried to figure out age verification and failed (multiple times). Louisiana just passed age verification requirements - but there are major privacy implications to the law. Now France is talking about it, and the list goes on and on.

The solution I'd like to see is for the same IETF draft standard that Apple (and soon Google) use for Private Access Tokens (which confirm the user is human) be used to say whether parental controls are in place on the device (and possibly what types of parental controls - nudity, sex, violence, etc.) Then websites can filter their content based on that data. The same IETF standard could be used to verify age if the states set up "mediator" services (a term defined in the standard). But parents would need to do minimal parenting to make sure the parental controls are in place.
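The challenge/response shape of that flow, from the website's side, looks roughly like this sketch. The header names follow the draft's `PrivateToken` scheme, but the challenge values are placeholders and the cryptographic verification (blind signatures against the issuer's key) is stubbed out - this is illustrative only:

```python
# Rough sketch of a server using a Private-Access-Token-style challenge:
# no token -> 401 with a challenge; token present -> verify (stubbed) and serve.
def verify_token(auth_header: str) -> bool:
    # Placeholder for real cryptographic verification against the issuer key.
    return True

def handle_request(headers: dict) -> tuple:
    auth = headers.get("Authorization", "")
    if not auth.startswith("PrivateToken token="):
        # Challenge and token-key values below are fake placeholders.
        challenge = 'PrivateToken challenge="abc...", token-key="xyz..."'
        return 401, {"WWW-Authenticate": challenge}, "token required"
    if verify_token(auth):
        return 200, {}, "full content"
    return 403, {}, "invalid token"

status, hdrs, body = handle_request({})
print(status)  # 401: no token presented, so the server challenges the client
```

Extending the attested claims from "is human" to "has parental controls enabled" would be new mediator/attester behavior on top of the draft, not something it specifies today.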

https://www.ietf.org/archive/id/draft-private-access-tokens-01.html

I don't know how any of the anti-porn / age verification laws are going to deal with Nostr. It's a type of chaos and ambiguity they're not prepared to deal with. I can just see some clueless politician saying "We need to subpoena the CEO of Nostr to appear in front of our committee for questioning!" And then being completely confused that something so big has no corporate structure.


Discussion

So true. Although a classic problem, I believe WE now have the ability to re-think all of this. It really applies to everyone also, especially as decentralization grows. nostr.build may get accused of spreading certain info, but how about Google, AWS, etc that empower nostr.build?!? Lol

Rethink, yes, but there's much to learn from the history of what's gone before.

The law has different rules for different types of adult material. If you're commercially publishing the material, there's one set of standards (documentation of age and consent). If you're an amateur uploading for personal reasons (e.g. dating) there's another set of rules. And if you're hosting random user-generated content there's yet another set of rules.

Ultimately @npub1nxy4qpqnld6kmpphjykvx2lqwvxmuxluddwjamm4nc29ds3elyzsm5avr7 , as the host, bears the legal responsibility for the content. They're going to get VERY familiar with DMCA take-down requests. The proposed changes to Section 230 will be very important to them.

Sorry for talking about you in the third person. For some reason I thought I was responding to @npub1wmr34t36fy03m8hvgl96zl3znndyzyaqhwmwdtshwmtkg03fetaqhjg240

Uggh…

I can recommend some very good first amendment lawyers, if you feel you need advice…

Corey Silverstein - http://porn.law

Larry Walters - https://www.firstamendment.com

But you're really more in the position of someone like Twitter or Facebook - a host that doesn't editorially approve what goes on their server. You wouldn't meet the 1/3rd threshold that some of the recent laws have put in place at which point age verification is necessary.

For the porn content - quite a bit of it can be legal. (Provided Congress doesn't change Section 230, and the GOP doesn't redefine "obscenity"). If you're not interested in hosting it, I am. And I'm sure others in the adult industry would also be interested.

I found the classification system that was most advanced in its day - ICRA…

https://web.archive.org/web/20080622002259/http://www.icra.org/vocabulary/

It was discontinued and the website taken down because not enough sites adopted it. Luckily ChatGPT remembered the name and pointed me to archive.org to find it.

If we could get feeds with "sensitive" content to classify themselves by even the broad categories in ICRA, that would be huge. They are:

- Nudity

- Sexual Material

- Violence

- Coarse Language

- Potentially Harmful Activities

There are many items under each… For example, you can specify "Erotica" under Sexual Material to indicate your content is only soft-core.

And then they have the idea of "Context" which is also really useful in some cases…

- Artistic

- Educational

- Medical

- Sports

- News

IMHO, that's where any classification system should start. Just pick up with what was best from years ago and build on it. This time maybe it can be implemented more simply.
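One hypothetical way to carry those ICRA categories on Nostr would be label tags in the NIP-32 style ("L" names a namespace, "l" a label inside it). The "icra" namespace and these slugified category names are my own assumptions for illustration, not anything standardized:

```python
# Hypothetical sketch: self-classify a Nostr event with ICRA's broad
# categories plus its "Context" modifier, using NIP-32-style label tags.
# The "icra" namespace is an assumption, not an existing standard.
ICRA_CATEGORIES = {
    "nudity", "sexual-material", "violence",
    "coarse-language", "potentially-harmful-activities",
}

def label_event(event, category, context=None):
    assert category in ICRA_CATEGORIES
    tags = event.get("tags", []) + [["L", "icra"], ["l", category, "icra"]]
    if context:  # e.g. "artistic", "medical" - ICRA's Context idea
        tags.append(["l", "context/" + context, "icra"])
    return {**event, "tags": tags}

evt = label_event({"kind": 1, "content": "...", "tags": []},
                  "nudity", context="artistic")
print(evt["tags"])
```

Clients could then combine category and context: hide "nudity" by default, say, but show it when labeled "context/medical" or "context/artistic", which is exactly the nuance ICRA was designed for.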