Hi praxeologist, good question. We use an AI model designed to flag and report CSAM. If the model is less than 90% confident, we manually review the media. Confirmed CSAM is removed and reported to NCMEC, and the account is then blocked from using our service.
We also filter media that suggests anything inappropriate involving a child, including cartoons, adult clothing, suggestive positions, etc. We have also recently stopped allowing free uploads of adult porn (intercourse).
We have experimented with the Cloudflare filter and Microsoft PhotoDNA, but find the AI model to be the most accurate.
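Roughly, the decision flow looks like the sketch below. This is an illustrative outline only, not our production code: the 90% threshold is the only number stated above, and the lower "flag" threshold plus every function and field name are placeholder assumptions.

```python
# Minimal sketch of the moderation flow described above (illustrative only).
from dataclasses import dataclass

AUTO_ACTION_THRESHOLD = 0.90   # stated above: below 90% confidence, humans review
FLAG_THRESHOLD = 0.50          # assumption: scores below this are not flagged at all

@dataclass
class Media:
    url: str
    uploader: str
    score: float  # classifier confidence that the media is CSAM, 0.0-1.0

def moderate(media: Media) -> str:
    """Route an upload based on the classifier's confidence score."""
    if media.score >= AUTO_ACTION_THRESHOLD:
        # High-confidence hit: remove, report to NCMEC, block the account.
        remove_media(media)
        report_to_ncmec(media)
        block_account(media.uploader)
        return "removed_and_reported"
    if media.score >= FLAG_THRESHOLD:
        # Flagged, but the model is not 90% sure: send to manual review.
        queue_for_manual_review(media)
        return "pending_review"
    return "allowed"

# Stub actions so the sketch runs; a real service would call its storage,
# reporting, and account systems here.
def remove_media(media: Media) -> None: print(f"removed {media.url}")
def report_to_ncmec(media: Media) -> None: print(f"reported {media.url} to NCMEC")
def block_account(uploader: str) -> None: print(f"blocked {uploader}")
def queue_for_manual_review(media: Media) -> None: print(f"queued {media.url} for review")
```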
More info can be found here:
https://nostr.build/features/
Thank you for the information. I have a few follow-up questions regarding the AI model you mentioned.
Was the model developed in-house, or is it based on existing models?
If it is based on existing models, could you specify which ones were used?
Can you provide more details on the training process and the data used to train the model?
I appreciate your help in understanding this better!