That's a good approach, but could be tricky on nostr. Maybe you could scrape NIP-56 reports? I know those still get published by some clients (which is awful, but I can't convince the devs to stop).
Here's an example: https://huggingface.co/Falconsai/nsfw_image_detection ... putting one of these (or actually multiple, and averaging the results) behind an API endpoint is not too difficult, and I'd be happy to do it for any service which has a **way to measure the effectiveness**. Since I will not be reviewing any images manually (!), and YOU will not be reviewing any images manually (!), and I will be deleting all data a few milliseconds after it hits the model and returns a score, you must have SOME way of deciding if the service is useful: user complaints, or blocks, or something like that. Ideally you'd run a big enough service that you can measure "complaints/blocks per day" and see that the number goes down when you start using the scores I provide.
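To be concrete, here is a minimal sketch of the scoring piece, assuming the standard transformers image-classification pipeline and that this particular model emits "nsfw"/"normal" labels (per its model card); the function name is just illustrative, not the actual service:

```python
# Minimal sketch, not a production service: load the model once, score one image,
# return a single 0..1 "nsfw" probability, and keep no copy of the image.
from transformers import pipeline
from PIL import Image

# Assumption: the model card's standard image-classification usage.
classifier = pipeline("image-classification", model="Falconsai/nsfw_image_detection")

def nsfw_score(path: str) -> float:
    """Return the model's nsfw probability for one image (0.0 .. 1.0)."""
    image = Image.open(path)
    # Typical output: [{'label': 'nsfw', 'score': 0.97}, {'label': 'normal', 'score': 0.03}]
    results = classifier(image)
    return next((r["score"] for r in results if r["label"] == "nsfw"), 0.0)
```

Averaging multiple models would just be running the same call per model and taking the mean of the scores; nothing here needs to retain the image after the call returns.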
As discussed in this thread, making these scores public is potentially dangerous. But providing a service that simply scores images, especially if that service is only offered to a small number of entities who can be trusted to use it only to help them delete something, is something Microsoft has been doing for decades; I can't see any particular risk in it.
But I only want to build this if someone can say "yes, I'll be able to measure the effectiveness somehow"... because doing this without measurement of any kind is useless, right?
Discussion
It's not awful.
It's illegal
Awful is very different from illegal.
Ok, but legality has bearing on awfulness
You don't have to worry about definitions. These models are very smart and are happy to provide you with a float between zero and one. And then you just set a threshold on what scores you will tolerate. No need to engage further with the question.
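For what it's worth, the thresholding step really is just this (the cutoff value below is made up and would have to be tuned against whatever metric you actually track):

```python
NSFW_THRESHOLD = 0.8  # illustrative cutoff; tune against your complaints/blocks-per-day numbers

def should_flag(score: float) -> bool:
    # Flag anything the model scores at or above the chosen threshold.
    return score >= NSFW_THRESHOLD
```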
Yes, more often than not, what is awful is legal and what is good is illegal.