Replying to Rizful.com

#asknostr among the problems that Nostr faces, the child porn problem is a very, very, very bad problem.

A VERY bad problem.

What is the current thinking among developers about how to deal with this?

Nobody likes censorship, but the only solution I can think of (SO FAR) is running an image identification service that labels dangerous stuff like this, and then broadcasts a list of (images, notes, users?) that score high on the "oh shit this is child porn" metric. Typically these systems just output a float between 0 and 1, which is the score....

Is anyone working on this currently?

I have a good deal of experience running ML services like image identification at scale, so this could be something interesting to work on for the community. (I also have a lot of GPU power, and anyway, if you do it right, this actually doesn't take a ton of GPUs even for millions of images per day....)

It would seem straightforward to subscribe to all the nostr image uploaders, generate a score on a 0-100 scale with 100 being "definitely child porn" and 0 being "not child porn", and then maybe broadcast events of some kind to relays with this "opinion" about the image/media?
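Roughly, a sketch in Python -- everything here is made up for illustration (the kind number, the tag names, the URL regex), and score_image() is a stand-in for whatever real classifier you'd run:

```
import re
import time

# Naive pattern for media URLs in note content; real clients also look at
# imeta tags, but this is enough for a sketch.
IMAGE_URL_RE = re.compile(r"https?://\S+\.(?:png|jpe?g|gif|webp)", re.IGNORECASE)

def extract_image_urls(note_content: str) -> list[str]:
    return IMAGE_URL_RE.findall(note_content)

def score_image(image_bytes: bytes) -> float:
    """Placeholder: a real model would return a float in [0, 1]."""
    raise NotImplementedError("plug an actual classifier in here")

def make_opinion_event(pubkey: str, image_url: str, score: float) -> dict:
    """Build an unsigned Nostr event carrying the classifier's opinion.
    Kind 30099 and the "score" tag are hypothetical, not any existing NIP."""
    return {
        "pubkey": pubkey,
        "created_at": int(time.time()),
        "kind": 30099,
        "tags": [
            ["r", image_url],                 # the media being scored
            ["score", f"{score * 100:.1f}"],  # scaled to the 0-100 range above
        ],
        "content": "",
    }
```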

Maybe someone from the major clients like nostr:npub1yzvxlwp7wawed5vgefwfmugvumtp8c8t0etk3g8sky4n0ndvyxesnxrf8q or #coracle or nostr:npub12vkcxr0luzwp8e673v29eqjhrr7p9vqq8asav85swaepclllj09sylpugg or nostr:npub18m76awca3y37hkvuneavuw6pjj4525fw90necxmadrvjg0sdy6qsngq955 has a suggestion on how this should be done.

One way or another, this has to be done. 99.99% of normies, the first time they see child porn on #nostr ... if they see it once, they'll never come back.....

Is there an appropriate NIP to look at? nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6 ? nostr:npub1l2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqutajft ? nostr:npub16c0nh3dnadzqpm76uctf5hqhe2lny344zsmpm6feee9p5rdxaa9q586nvr ?

Relays have to become more whitelisted and less open, and clients have to implement the outbox model and stop relying on 2 or 3 big relays; then we can stop worrying about this.


Discussion

If you have a server that anyone is free to write to on the internet this kind of stuff will always happen. The obvious solution is to not have this kind of server.

You can also have this kind of server but disallow links. That will probably go a long way too.

The obvious solution is to just give up working on it and spend the next few years smoking weed and going snowboarding instead. But the actual solution I think has to be some kind of distributed scoring system.

I drafted a spec to auto block/show/ask for content previews from specific domain providers. Clients can automatically block previews from a domain, or let users decide what to do with them. You can also aggregate these lists from different users into a rating system for domains. Would appreciate any feedback.

```
{
  "kind": 10099,
  "content": "",
  "tags": [
    ["d", "domain_lists"], // identifier
    ["white", "nostr.build"],
    ["white", "void.cat"],
    ["black", "malicious-site.net"],
    ["black", "scam-domain.com"],
    ["unknown", "ask"] // Options: "load" | "block" | "ask"
  ]
}
```

https://github.com/limina1/nips/blob/extend56/56.md
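On the client side, handling such a list could be as simple as the sketch below (preview_policy() and the naive suffix matching are my own, not part of the draft):

```
from urllib.parse import urlparse

def preview_policy(domain_list_event: dict, media_url: str) -> str:
    """Return "load", "block", or "ask" for a media URL, per the kind
    10099 list. Suffix matching is naive; real code should also check
    label boundaries so "evil-nostr.build" can't pass as "nostr.build"."""
    host = urlparse(media_url).hostname or ""
    fallback = "ask"
    for tag in domain_list_event.get("tags", []):
        if tag[0] == "white" and host.endswith(tag[1]):
            return "load"
        if tag[0] == "black" and host.endswith(tag[1]):
            return "block"
        if tag[0] == "unknown":
            fallback = tag[1]  # "load" | "block" | "ask"
    return fallback
```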

I just don't know if a "domain blacklist" will work well. I think it will be too slow, too incomplete, and ineffective. I think the only way to do this at scale is for relays to have a way to score images and videos, and simply be sure to delete and not rebroadcast any that get a bad score.
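Very roughly, something like this on the relay side -- the threshold is arbitrary and fetch()/score_image() are injected placeholders:

```
import re

IMAGE_URL_RE = re.compile(r"https?://\S+\.(?:png|jpe?g|gif|webp)", re.IGNORECASE)
CSAM_THRESHOLD = 0.9  # arbitrary cutoff for this sketch

def should_store(event: dict, fetch, score_image) -> bool:
    """Score every media URL in an incoming event; reject the whole event
    if anything crosses the threshold, so it is never stored or rebroadcast."""
    for url in IMAGE_URL_RE.findall(event.get("content", "")):
        if score_image(fetch(url)) >= CSAM_THRESHOLD:
            return False
    return True
```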

Wonderful that some arse decided to drop some pics in this particular feed. Charming content, brought to me by the public Einundzwanzig relay.

Not sure if you are serious or just trolling the idea. But -- like each individual relay implements its own scoring system? Seems like a ton of duplicated effort.

I am not trolling.

I do think it would be good to have a system for identifying harmful stuff. It would be a nice workaround that could work today, and I would definitely adopt it at https://njump.me/ because we keep getting reports from Cloudflare. I tried some things but they didn't work very well, so if you know how to do it, I'm interested.

However the long-term solution is paid relays, community relays, relays that only give access to friends of friends of friends, that kind of stuff.

so why do we even need nostr then?

we have mastodon

Because Nostr isn't written in Ruby.

OK, so thinking about it more, in light of what nostr:npub1q3sle0kvfsehgsuexttt3ugjd8xdklxfwwkh559wxckmzddywnws6cd26p says ...

1) Obviously the spec to use would be the LABEL spec, NIP-32 -- not sure why I didn't figure that out to begin with... https://github.com/nostr-protocol/nips/blob/master/32.md

2) My original idea of "publicly publish a score for each image" is a completely impossible and terrible idea... because, of course, the bad guys could just use the service in the reverse of the way it's intended!

Anyway, 1/2 of the problem -- running a service which produces scores -- is completely something I could do -- basically process millions of images and spit out scores for them -- but the other 1/2 ... how to let clients or relays use these scores WITHOUT also giving them a "map to all the bad stuff" at the same time...? I'm not smart enough currently to come up with a solution. It might involve something fancy involving cryptography or "zero knowledge proofs" or things that are generally out of my intellectual league.
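For reference, here is roughly what such a label could look like -- kind 1985 and the L/l tags are from NIP-32 itself, but the "moderation" namespace, the label value, and tagging the media with "r" are my assumptions (and, as said above, publishing these openly has the "map to all the bad stuff" problem):

```
label_event = {
    "kind": 1985,                             # NIP-32 label event
    "tags": [
        ["L", "moderation"],                  # label namespace (hypothetical)
        ["l", "csam-suspect", "moderation"],  # the label, qualified by namespace
        ["r", "https://example.com/img.jpg"], # the media being labeled
    ],
    "content": "",
}
```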

> Relays have to become more whitelisted and less open

No.

And then everyone runs a personal relay (I'll take care of making that trivially easy for people) and everything is perfect!