I am not trolling.

I do think it would be good to have a system for identifying harmful stuff. It would be a nice workaround that would work today, and I would definitely adopt it at https://njump.me/, because we keep getting reports from Cloudflare. I tried some things, but they didn't work very well, so if you know how to do it, I'm interested.

However, the long-term solution is paid relays, community relays, relays that only give access to friends of friends of friends -- that kind of stuff.


Discussion

so why do we even need nostr then?

we have mastodon

Because Nostr isn't written in Ruby.

OK, so thinking about it more, in light of what nostr:npub1q3sle0kvfsehgsuexttt3ugjd8xdklxfwwkh559wxckmzddywnws6cd26p says ...

1) Obviously the spec to use would be the LABEL spec, NIP-32 -- not sure why I didn't figure that out to begin with: https://github.com/nostr-protocol/nips/blob/master/32.md

2) My original idea of "publicly publish a score for each image" is impossible and a terrible idea, because the bad guys could simply use the service in the reverse of the way it's intended.

Anyway, one half of the problem -- running a service that produces scores, i.e. processing millions of images and spitting out scores for them -- is something I could do. The other half -- letting clients or relays use these scores WITHOUT also giving them a "map to all the bad stuff" at the same time -- is where I'm stuck. I'm not smart enough currently to come up with a solution. It might involve something fancy with cryptography or "zero-knowledge proofs" -- things that are generally out of my intellectual league.
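
For concreteness, a NIP-32 label event carrying a per-image score might look roughly like the sketch below. The namespace, label name, and the choice of putting the score in `content` are all made up for illustration, and the exact tag layout should be double-checked against the current spec text -- and, per point 2, publishing these publicly is exactly the part that doesn't work.

```typescript
// Hedged sketch: an unsigned NIP-32 label event (kind 1985) carrying an
// image score. The namespace "com.example.image-score", the label name,
// and the score-in-content format are invented for illustration only.

interface UnsignedEvent {
  kind: number;
  created_at: number;
  tags: string[][];
  content: string;
}

// Hypothetical inputs: the image URL and a score in [0, 1] produced by
// the classification service described above.
function buildImageScoreLabel(imageUrl: string, score: number): UnsignedEvent {
  return {
    kind: 1985, // NIP-32 label event
    created_at: Math.floor(Date.now() / 1000),
    tags: [
      ["L", "com.example.image-score"],                     // label namespace (made up)
      ["l", "nsfw-likelihood", "com.example.image-score"],  // label value
      ["r", imageUrl],                                       // the thing being labeled
    ],
    content: JSON.stringify({ score }),                      // score carried in content
  };
}

// The event would still need an id, pubkey, and signature (via whatever
// signing library the service uses) before being published to relays.
```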