Replying to s3x_jay

I agree. Going back over a decade, there have been multiple attempts to classify the various types of content certain people didn't like. The problem with most of them was that they were based on specific cultural norms: "inappropriate for someone under 18 years of age" means very different things in Finland and Saudi Arabia. The classification systems that might have been culturally neutral were too complicated to implement.

This isn't a new problem, and what history has taught us is that there's no simple solution. Germany requires age verification for all adult content (but only for .de domains). The UK tried to figure out age verification and failed (multiple times). Louisiana just passed age verification requirements, but the law has major privacy implications. Now France is talking about it, and the list goes on and on.

The solution I'd like to see: the same IETF draft standard that Apple (and soon Google) uses for Private Access Tokens (which confirm the user is human) could also be used to signal whether parental controls are in place on the device (and possibly which types: nudity, sex, violence, etc.). Websites could then filter their content based on that signal. The same standard could be used to verify age if the states set up "mediator" services (a term defined in the draft). But parents would need to do minimal parenting to make sure the parental controls are actually turned on.

https://www.ietf.org/archive/id/draft-private-access-tokens-01.html
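
To make that concrete, here's a rough sketch of what the website side could look like. The PrivateToken challenge format follows the draft above; everything about parental controls is my proposal, not in any spec, so treat those field names as invented:

```typescript
// Sketch only. The PrivateToken challenge follows the IETF draft linked
// above; the parental-controls fields do NOT exist in the draft and are
// purely illustrative.

type Category =
  | "nudity"
  | "sexual-material"
  | "violence"
  | "coarse-language"
  | "harmful-activities";

type ParentalControls = {
  enabled: boolean;
  blocked: Category[]; // hypothetical per-category settings on the device
};

// Assumed result of verifying a token. In the actual draft a token only
// proves the client passed the issuer's checks; carrying device state
// like this would need a new extension to the protocol.
type VerifiedToken = {
  valid: boolean;
  parentalControls?: ParentalControls;
};

// Per the draft, an origin challenges clients via a WWW-Authenticate
// header using the PrivateToken scheme. Values here are placeholders.
function challengeHeader(tokenKey: string): string {
  const challenge = Buffer.from("example-challenge").toString("base64url");
  return `PrivateToken challenge="${challenge}", token-key="${tokenKey}"`;
}

// Decide whether to serve content in a given category.
function canServe(category: Category, token: VerifiedToken | null): boolean {
  if (!token || !token.valid) return false; // no valid token: don't serve
  const pc = token.parentalControls;
  if (!pc || !pc.enabled) return true; // no controls: treat as an adult device
  return !pc.blocked.includes(category); // honor per-category blocks
}
```

The nice property is that the site never learns who the user is, just "this device does (or doesn't) have controls for these categories."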

I don't know how any of the anti-porn / age verification laws are going to deal with Nostr. It's a type of chaos and ambiguity they're not prepared to deal with. I can just see some clueless politician saying "We need to subpoena the CEO of Nostr to appear in front of our committee for questioning!" And then being completely confused that something so big has no corporate structure.

I found the classification system that was the most advanced of its day: ICRA…

https://web.archive.org/web/20080622002259/http://www.icra.org/vocabulary/

It was discontinued and the website taken down because not enough sites adopted it. Luckily ChatGPT remembered the name and pointed me to archive.org, where I found it.

If we could get feeds with "sensitive" content to classify themselves by even the broad categories in ICRA, that would be huge. They are:

- Nudity

- Sexual Material

- Violence

- Coarse Language

- Potentially Harmful Activities

There are many items under each… For example, you can specify "Erotica" under Sexual Material to indicate your content is only soft-core.
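
On the Nostr side, self-classification could be as simple as notes carrying machine-readable labels. A sketch, assuming NIP-32-style "L"/"l" label tags; the "icra" namespace and the value strings are invented for illustration:

```typescript
// Sketch: a note self-classifying with ICRA's broad categories.
// NIP-32-style label tags are assumed; the "icra" namespace and
// the value strings are made up for this example.

const selfLabeledNote = {
  kind: 1,
  content: "…", // the actual post
  tags: [
    ["L", "icra"], // label namespace (hypothetical)
    ["l", "nudity", "icra"],
    ["l", "sexual-material/erotica", "icra"], // sub-item: soft-core only
  ],
  // id, pubkey, sig, created_at as usual
};
```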

And then they have the idea of "Context," which is also really useful in some cases (see the filtering sketch after the list)…

- Artistic

- Educational

- Medical

- Sports

- News
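
Putting the two together, a client could filter notes by checking their labels against the device's parental-control state, with context as a possible override. A sketch; the override policy (letting medical or educational context through a nudity-only block) is my own choice, not anything ICRA defined:

```typescript
// Sketch: combine a note's ICRA-style labels with the device's
// blocked categories (see the token sketch above). The context
// override is an example policy, not part of ICRA.

type Category =
  | "nudity"
  | "sexual-material"
  | "violence"
  | "coarse-language"
  | "harmful-activities";

type Context = "artistic" | "educational" | "medical" | "sports" | "news";

function shouldHide(
  labels: Category[],
  contexts: Context[],
  blocked: Category[],
): boolean {
  // Nothing blocked is present: show the note.
  if (!labels.some((l) => blocked.includes(l))) return false;
  // Example policy: nudity-only content in a medical or educational
  // context gets through a nudity filter.
  const onlyNudity = labels.length > 0 && labels.every((l) => l === "nudity");
  const exempt = contexts.includes("medical") || contexts.includes("educational");
  return !(onlyNudity && exempt);
}

// e.g. shouldHide(["nudity"], ["medical"], ["nudity"]) === false
```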

IMHO, that's where any classification system should start. Just pick up what was best from years ago and build on it. This time maybe it can be implemented more simply.
