Using AI, Cloudflare?? How is that OK from a privacy POV?
We take this very seriously, using a combination of AI, Cloudflare, and manual filters. If anything is found it is immediately removed, the user’s account is blocked from our service, and all evidence is reported to NCMEC.
We also provide moderation headers that classify adult and violent content, which I believe nostr:npub1xtscya34g58tk0z605fvr788k263gsu6cy9x0mhnm87echrgufzsevkk5s is planning to implement. This would help if something temporarily slips through.
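As a rough illustration, a client could read such a header from the media host's response and decide whether to blur or hide the content. This is only a sketch: the header name `x-content-moderation` and the `label=score` format are assumptions for illustration, not the host's documented API.

```python
# Hypothetical example: parse a moderation header of the assumed form
# "x-content-moderation: adult=0.92, violence=0.05" into label scores.
# Header name and format are illustrative assumptions, not a real spec.

def parse_moderation_header(headers: dict) -> dict:
    """Parse a comma-separated list of label=score pairs into a dict."""
    raw = headers.get("x-content-moderation", "")
    labels = {}
    for part in raw.split(","):
        if "=" in part:
            label, score = part.split("=", 1)
            labels[label.strip()] = float(score)
    return labels

# A client could blur media whose score exceeds a chosen threshold:
example = {"x-content-moderation": "adult=0.92, violence=0.05"}
labels = parse_moderation_header(example)
blur_by_default = labels.get("adult", 0.0) > 0.8
```

The point is that classification happens server-side once at upload time, and clients only consume the resulting labels, so a late takedown isn't the only line of defense.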
nostr:npub12vkcxr0luzwp8e673v29eqjhrr7p9vqq8asav85swaepclllj09sylpugg is adding us as a media host over the next couple of months. I will make sure we bring up this type of CSAM moderation so they can apply it to their service.
Discussion
We only scan for, act on, or retain data from CSAM-related content; everything else just passes through, and no data is recorded.
Well, the worry I have is that a software stack that scans for CSAM can easily be repurposed to scan for anything else. CSAM scanning has also proven to produce far too many false positives, sometimes with very bad consequences for the poster.
The sad truth is that all media people post on the internet, anywhere, is scanned by many different systems… We try to be as minimally invasive as possible and provide as much privacy as possible, but there is no way around scanning for CSAM…
A larger worry is embedded AI on proprietary devices, coming fast, that effectively sees and parses EVERYTHING the user interacts with on those devices. This would be the ultimate in client-side scanning, with no way to independently verify who receives this information, who can hack into it, or what access governments could demand in the future.
I want a Jarvis-like true digital assistant as much as (or more than) the next nerd. But only on open systems that I can rationally trust.
We also remove all location-based metadata from regular uploaded content, which further helps privacy.
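In EXIF, location data lives in a dedicated GPS IFD (tag group), so stripping it means dropping that whole group while leaving the rest of the metadata alone. A minimal sketch of that step, assuming the EXIF has already been parsed into tag groups by a library (the host's actual pipeline isn't described here):

```python
# Minimal sketch: remove the GPS tag group from parsed EXIF metadata.
# The dict layout mirrors how EXIF groups tags into IFDs; the sample
# values below are made up for illustration.

GPS_IFD = "GPS"  # EXIF stores all location tags under the GPS IFD

def strip_location(exif: dict) -> dict:
    """Return a copy of parsed EXIF with the GPS tag group removed."""
    return {ifd: tags for ifd, tags in exif.items() if ifd != GPS_IFD}

uploaded = {
    "0th": {"Make": "CameraCo"},  # non-location metadata is kept
    "GPS": {"GPSLatitude": ((52, 1), (31, 1), (0, 1))},
}
cleaned = strip_location(uploaded)
```

Dropping the GPS IFD wholesale is simpler and safer than filtering individual tags, since any tag in that group can leak location.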