There are authorities that are actively hunting for the source of these images, globally. As much as I hate the content, the best thing we can do if we want to let those authorities find the kids and the perps is to flag the user account and the content as such and let those hunters do their jobs. That's not ignoring the problem and hoping it goes away; it's specifically identifying the profiles and accounts so that the metadata on the images can remain intact and available. I'm sure there will be AI solutions on the corporate servers, but I don't want those on Nostr. Report, Mute, Move on. That's all we can do unless we are the hunters.


Discussion

All agreed, except for one possible improvement -- what if it were possible to run an automated service that proactively looks for these images and publishes a score in a "safe" way: a score that could only be used to PREVENT clients from being shown the note with the bad image, but could never be used to SEARCH for such images?
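One way the "prevent but not search" property above could be sketched: a service publishes only one-way digests of flagged images, so a client can test whether a specific image it already has matches a flagged entry (and suppress it), but the published list cannot be reversed or queried to locate the images themselves. Everything below is hypothetical -- the function names, the `flagged_digests` set, and the use of SHA-256 are illustrative assumptions; a real deployment would use a perceptual hash so near-duplicates also match, not an exact cryptographic hash.

```python
import hashlib

def digest(image_bytes: bytes) -> str:
    # One-way digest of the image content. SHA-256 is used here only
    # for illustration; it catches exact copies but not re-encodes.
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical list published by the scoring service. Digests reveal
# nothing about the images and cannot be used to go find them; they
# can only be compared against content a client is about to render.
flagged_digests = {digest(b"known-bad-image-bytes")}

def should_display(image_bytes: bytes) -> bool:
    # Client-side check before rendering: suppress if flagged.
    return digest(image_bytes) not in flagged_digests
```

A client would call `should_display` on each fetched image and simply skip rendering on a match, so the blocklist never acts as a search index.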

I would not mind a relay-based service that acts as a hunter of images like that and provides reports to the proper authorities. But I wouldn't want something like that bogging down the base-level protocol and slowing things down for everyone on the planet.

The "authorities" are completely useless and not really doing anything. Just letting you know.

Right, well our job is to report it and find a "safe" and "fair" way to block it.

Oh, I'm fully aware. But that doesn't mean we shouldn't brainstorm about how the ideal world should work.

Thanks for your great effort to find a solution of the problem. 🙄