Replying to Rizful.com

#asknostr among the problems that Nostr faces, the child porn problem is a very, very, very bad problem.

A VERY bad problem.

What is the current thinking among developers about how to deal with this?

Nobody likes censorship, but the only solution I can think of (SO FAR) is running an image-identification service that labels dangerous stuff like this, and then broadcasts a list of images, notes, or users that score high on the "oh shit this is child porn" metric. Typically these systems just output a float between 0 and 1, which is the score....

Is anyone working on this currently?

I have a good deal of experience running ML services like image identification at scale, so this could be something interesting to work on for the community. (I also have a lot of GPU power, and anyway, if you do it right, this actually doesn't take a ton of GPUs even for millions of images per day....)

It would seem straightforward to subscribe to all the nostr image uploaders, generate a score with 1.0 being "definite child porn" and 0.0 being "not child porn", and then broadcast events of some kind to relays with this "opinion" about the image/media?
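One way to sketch the "broadcast an opinion" step is as a NIP-32-style label event (kind 1985) that carries the classifier's score for a piece of media. The namespace name, the label value, and the score-in-content convention below are illustrative assumptions, not anything specified by a NIP:

```python
import json
import time


def make_label_event(pubkey: str, image_url: str, score: float) -> dict:
    """Build an unsigned NIP-32-style label event (kind 1985) carrying a
    classifier score for a media URL. The "content-safety" namespace and
    the JSON score payload are hypothetical conventions for this sketch."""
    assert 0.0 <= score <= 1.0
    return {
        "kind": 1985,
        "pubkey": pubkey,
        "created_at": int(time.time()),
        "tags": [
            ["L", "content-safety"],               # hypothetical label namespace
            ["l", "csam-suspect", "content-safety"],
            ["r", image_url],                      # the media being labeled
        ],
        "content": json.dumps({"score": round(score, 3)}),
    }


event = make_label_event("deadbeef" * 8, "https://example.com/img.jpg", 0.97)
print(json.dumps(event, indent=2))
```

The event would still need an `id` and `sig` before a relay accepts it; clients could then filter or blur media whose best-reputed label score crosses some threshold they choose.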

Maybe someone from the major clients like nostr:npub1yzvxlwp7wawed5vgefwfmugvumtp8c8t0etk3g8sky4n0ndvyxesnxrf8q or #coracle or nostr:npub12vkcxr0luzwp8e673v29eqjhrr7p9vqq8asav85swaepclllj09sylpugg or nostr:npub18m76awca3y37hkvuneavuw6pjj4525fw90necxmadrvjg0sdy6qsngq955 has a suggestion on how this should be done.

One way or another, this has to be done. 99.99% of normies, the first time they see child porn on #nostr ... if they see it once, they'll never come back.....

Is there an appropriate NIP to look at? nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6 ? nostr:npub1l2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqutajft ? nostr:npub16c0nh3dnadzqpm76uctf5hqhe2lny344zsmpm6feee9p5rdxaa9q586nvr ?

I don't have images displayed by default and I don't have any way to tell (other than the person I'm following) whether to display the image or not. Once I see it I cannot unsee it.

I think encouraging self-publication of honest text descriptions of non-text content is the way to go. I recently encountered someone who did not wish to publish a content warning, so my recourse was to mute that person. If we repeat this process continuously, with both clients and relays trying to evaluate whether a description of media is honest and blocking dishonest descriptions, that would go a long way.

In other words, use classification systems not to identify one particular type of content on a scale from 0.0 to 1.0, but rather have it judge whether the attached description of an image or video is honest on a scale from 0.0 to 1.0. Then everybody can block dishonest npubs and filter what they wish to see or not see based on descriptions. If an image or video URL does not have a description alongside it, score it 0.0. I don't know which NIPs this would use or add. And the classification systems would not be required to be used by anyone, but they might help identify dishonest sources.
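The scoring rule above can be sketched in a few lines. Here `judge` stands in for a real image/text consistency model; the URL regex and the descriptions-as-a-dict shape are assumptions made for the sketch, since no NIP is named for carrying the descriptions:

```python
import re

# Rough pattern for image URLs embedded in note text (assumption for this sketch).
IMAGE_URL = re.compile(r"https?://\S+\.(?:png|jpe?g|gif|webp)", re.IGNORECASE)


def honesty_score(note_content: str, descriptions: dict, judge) -> dict:
    """Score each media URL found in a note. A URL with no description
    scores 0.0, as proposed above; otherwise judge(url, description),
    a stand-in for a real classifier, returns a score from 0.0 to 1.0."""
    scores = {}
    for url in IMAGE_URL.findall(note_content):
        desc = descriptions.get(url, "").strip()
        scores[url] = judge(url, desc) if desc else 0.0
    return scores


# Toy judge that fully trusts any non-empty description.
note = "sunset https://example.com/a.jpg and https://example.com/b.png"
descs = {"https://example.com/a.jpg": "a sunset over the ocean"}
print(honesty_score(note, descs, lambda url, desc: 1.0))
```

A client could hide anything below its own threshold, while a relay operator could use the same scores to exile npubs that repeatedly attach dishonest descriptions.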


Discussion

TLDR: encourage honest text captions.

This is basically a war against illegal and dangerous content. You can’t just nicely ask the other side to politely label their weaponry.

The very purpose of a censorship-resistant communications network is to permit some forms of "illegal and dangerous" content, because "illegal" varies with time and jurisdiction and "dangerous" varies with time and culture. Nostr doesn't host non-text media; websites do.

I am not trying to minimize the real issue at hand, and my suggestion is not about asking nicely and politely. I suggest we build incentives into clients and relays for people to honestly label their content. Those who do not label at all can be easily exiled, and even individual posts without a label can be easily blocked. Those who label dishonestly can be (admittedly less easily) exiled.
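The "exile the unlabeled" incentive could live in a relay's admission policy. A minimal sketch, assuming descriptions travel in NIP-92-style `imeta` tags with an `alt` field (other conventions would work the same way):

```python
import re

# Media URLs a note might embed (pattern is an assumption for this sketch).
MEDIA_URL = re.compile(r"https?://\S+\.(?:png|jpe?g|gif|webp|mp4|webm)", re.IGNORECASE)


def has_honest_labels(event: dict) -> bool:
    """Relay-side admission check: every media URL in the note content must
    have a non-empty description. Descriptions are assumed to arrive in
    NIP-92-style imeta tags, e.g. ["imeta", "url https://...", "alt ..."]."""
    described = set()
    for tag in event.get("tags", []):
        if tag and tag[0] == "imeta":
            fields = dict(f.split(" ", 1) for f in tag[1:] if " " in f)
            if fields.get("alt", "").strip():
                described.add(fields.get("url"))
    return all(url in described
               for url in MEDIA_URL.findall(event.get("content", "")))
```

A relay applying this check rejects (or quarantines) any note carrying undescribed media; the harder problem of *dishonest* descriptions would then be handled downstream by classifiers and mute lists, as discussed above.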