Replying to Constant

Hello Nostr, if you are in a great mood just skip this post; it's depressing.

So I had not encountered it before, but yesterday I crossed paths with Child Sexual Abuse Material (CSAM) on Nostr. In my regular internet usage over the years I have rarely come across this stuff, though I guess if I were to go looking for it I would find it eventually.

That is to say, the status quo is that it does exist, but most people, most of the time, won't have to deal with it. I think it is important to realize that the world is not perfect as it is when reflecting on these matters in the context of Nostr.

It goes without saying, but just to be clear: yes I think we should all learn how to tie nooses and identify adequate oak trees.

However marginalized CSAM is, some people want governments to go above and beyond to combat it. The prime example currently is the 'Chat Control' regulation proposed in the EU, which wants to install big brother client-side on your phone to scan every single thing you do and flag any suspicious behavior/content before it gets encrypted. However understandable the motivation might be, even the advocacy groups and agencies dealing with the CSAM problem are against this type of thing, if only because they are already swamped with the work of processing material as it is; opening the floodgates with false positives won't help anything and will probably make the situation worse. That is aside from the obvious objections to forcibly installing big brother on people's hardware, of course.

Back to Nostr. On the one hand we have the end user, who does not want to be confronted with this material. From this perspective, CSAM is just one of many things a user might want to filter out, along with other material that might not be illegal per se but is simply NSFW, etc. Whatever means we find to do this, failure by those mechanisms is bad and unwanted, but it is not a direct systemic risk to Nostr; like I mentioned in the beginning, it is not impossible to accidentally come across this type of stuff on the internet today as is, and the whole world is still using it.
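For illustration only, here is a minimal sketch of what that user-side filtering could look like, assuming a hypothetical client that keeps a local mute list of pubkeys and a keyword blocklist; the event shape follows the usual Nostr event fields, everything else is made up for the example and is not any particular client's API.

```typescript
// Minimal sketch of client-side filtering; the mute list, keyword list and
// settings object are hypothetical user-maintained data, not a real client.

interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  tags: string[][];
  content: string;
}

interface FilterSettings {
  mutedPubkeys: Set<string>;     // authors the user never wants to see
  blockedKeywords: string[];     // case-insensitive substrings to hide
  hideContentWarnings: boolean;  // hide anything the author self-labelled
}

function shouldHide(event: NostrEvent, settings: FilterSettings): boolean {
  if (settings.mutedPubkeys.has(event.pubkey)) return true;

  // Respect author-supplied labels such as a "content-warning" tag.
  if (settings.hideContentWarnings &&
      event.tags.some(tag => tag[0] === "content-warning")) {
    return true;
  }

  const text = event.content.toLowerCase();
  return settings.blockedKeywords.some(word => text.includes(word.toLowerCase()));
}
```

The point is only that a failure of such a filter is an unpleasant experience, not a systemic one: the user clicks away, tightens the settings and moves on.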

But it does become a systemic issue from the relay perspective. Here, it is not some incidental bad experience that can be clicked away. It is a crime to host this type of material, which brings in the risk of prosecution for 'simply running a relay' that some asshole decided to nuke with CSAM or other illegal material.

But here is where my optimism comes in. Nostr is pro censorship; the theory is that every relay can moderate to its heart's content, because users are ultimately always able to route around such obstacles (very much like 'the internet' itself). This means that relays should be able to adjust their policies and methods of moderation to their capacity to deal with unwanted content and their risk appetite: from a locked-down, whitelist-only relay on one side of the spectrum, all the way to an open relay with heavy, sophisticated analytics for assessment and filtering, and everything in between. Although this won't deliver a perfect solution in all cases, it will remove the dark cloud of systemic risk to the protocol/network, because we are able to sufficiently marginalize the phenomenon.
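As a rough sketch of that spectrum, assuming a hypothetical relay that gets to decide whether to accept an incoming event before storing it (the whitelist, the screening hook and all the names below are invented for illustration, not a real relay implementation):

```typescript
// Sketch of relay-side write policies, from whitelist-only to more open setups.
// This only illustrates the idea that each operator tunes policy to their
// capacity and risk appetite; it is not how any actual relay is written.

interface IncomingEvent {
  id: string;
  pubkey: string;
  kind: number;
  content: string;
}

type PolicyDecision = { accept: true } | { accept: false; reason: string };

interface RelayPolicy {
  onEvent(event: IncomingEvent): PolicyDecision;
}

// One end of the spectrum: only pre-approved pubkeys may write.
class WhitelistPolicy implements RelayPolicy {
  constructor(private allowed: Set<string>) {}

  onEvent(event: IncomingEvent): PolicyDecision {
    return this.allowed.has(event.pubkey)
      ? { accept: true }
      : { accept: false, reason: "blocked: pubkey not on whitelist" };
  }
}

// Further along the spectrum: accept from anyone, but run the payload
// through some (hypothetical) automated assessment before storing it.
class OpenWithScreeningPolicy implements RelayPolicy {
  constructor(private looksProblematic: (e: IncomingEvent) => boolean) {}

  onEvent(event: IncomingEvent): PolicyDecision {
    return this.looksProblematic(event)
      ? { accept: false, reason: "blocked: flagged for operator review" }
      : { accept: true };
  }
}
```

A user refused by one relay's policy simply publishes to another, which is exactly the routing-around property the theory above relies on.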

On a last note: when talking about filtering/assessing this content, it gets complicated really quickly. You can imagine some AI performing such a task, or using lists of known content to filter against; however you want to do it, you first run into the question of how you construct that stuff in the first place, which requires gathering such content and having human eyes look at it. And subsequently you have produced tooling that can be flipped around and used as a search engine to seek out such material instead of filtering it away. So yeah, there are no graceful, perfect solutions I am afraid.
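To make the 'lists of known content' idea concrete: the usual approach is to hash incoming media and compare it against a list of hashes of already-identified material, so the operator never has to store or look at the material itself. The sketch below uses a plain SHA-256 over the file bytes purely for illustration, assuming Node.js; real systems use perceptual hashes (which survive re-encoding and cropping) and hash lists maintained by specialized organizations, neither of which is reproduced here.

```typescript
// Sketch of hash-list filtering for uploaded media (Node.js assumed).
// A plain cryptographic hash only catches byte-identical copies; production
// systems use perceptual hashing instead, deliberately not shown here.

import { createHash } from "node:crypto";

// Hypothetical list of hex digests of known-bad files, supplied by a trusted
// third party; building such a list is exactly the hard, human-intensive
// part described above.
const knownBadHashes = new Set<string>();

function isKnownBad(fileBytes: Buffer): boolean {
  const digest = createHash("sha256").update(fileBytes).digest("hex");
  return knownBadHashes.has(digest);
}

// Example use at upload time: reject the file before it is ever stored.
function handleUpload(fileBytes: Buffer): { stored: boolean; reason?: string } {
  if (isKnownBad(fileBytes)) {
    return { stored: false, reason: "matched known-bad hash list" };
  }
  return { stored: true };
}
```

And as noted above, the same lookup works in reverse: whoever holds the list and the tooling can use it to find material rather than filter it, which is why there is no graceful, perfect solution here.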

Well, there is one of course….

https://cdn.satellite.earth/a92bdd80dbd45e00636a9db615061eef168c3164a0e1bfa1abfb0784e74cd24e.mp3

Sadly, this is a very complex issue, not only for Nostr but for the entirety of the internet and offline too. First off, thank you for sharing and bringing this up, because I had the same questions and posted them on here. Second, I strongly believe that concerned users like us can play a massive part in combating this type of content, better than any AI or ML. I actively report content like this, and I came across it a few hours ago. I always make an effort to report these things. Like you said, it is in the best interest of any hosting owner to make sure they do not host "illegal" content such as CSAM. Atm, I have no better answers or one solution that fits all scenarios. But I have faith that collectively we can use our own discernment to report this kind of content. Ty! ❤
