Yeah I was reading some of his responses in the thread. I think it’s an urgent matter for them to discuss. Isn’t there like a dev group chat or meeting for these types of issues?

Agreed. #damus uses nostr:npub1nxy4qpqnld6kmpphjykvx2lqwvxmuxluddwjamm4nc29ds3elyzsm5avr7 for our image hosting and they have a very thorough process for finding and removing/blocking this content. It’s very advanced and works well. Hopefully other image hosts follow this… In terms of places they discuss… not sure.

Primal needs to get their shit together. They’re making the whole protocol look bad.

Yeah but they won't change shit if no one raises a fuss. Honestly, they should be boycotted at this point because they're one of the main ones allowing this shit through. We're all always talking about "vote with your wallet" but no one is doing it. Primal is the one being pushed for all newcomers for onboarding. That should change until they get themselves straight.

I think a lot of people have no idea and if they did know, would probably wanna go with another client

We take this very seriously, using a combination of AI, Cloudflare, and manual filters. If anything is found, it is immediately removed, the user’s account is blocked from our service, and all evidence is reported to NCMEC.

We also provide moderation headers that classify adult and violent content, which I believe nostr:npub1xtscya34g58tk0z605fvr788k263gsu6cy9x0mhnm87echrgufzsevkk5s is planning to implement. This would help if something temporarily slips by.
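Roughly, a client could consume such headers like this. This is only a sketch: the `x-content-moderation` header name and its values are assumptions for illustration, not necessarily the host’s actual format.

```ts
// Sketch: read a (hypothetical) moderation header before rendering media.
type Moderation = "clean" | "adult" | "violent" | "unknown";

async function checkModeration(url: string): Promise<Moderation> {
  // A HEAD request reads the classification without downloading the file.
  const res = await fetch(url, { method: "HEAD" });
  const label = res.headers.get("x-content-moderation"); // assumed header name
  if (label === "adult" || label === "violent") return label;
  return label === null ? "unknown" : "clean";
}

// Usage: blur anything flagged before it reaches the feed.
const verdict = await checkModeration("https://media.example/abc.jpg");
const shouldBlur = verdict === "adult" || verdict === "violent";
```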

nostr:npub12vkcxr0luzwp8e673v29eqjhrr7p9vqq8asav85swaepclllj09sylpugg is adding us as a media host over the next couple of months. I will make sure we bring up this type of CSAM moderation so they can apply it to their service.

You guys do awesome work. Thank you so much, seriously.

Personally, after having to deal with this shit, a meat grinder would be too good for these sick fucks..

Do you think some of it is posted by feds?

No idea.. I would hope that they aren’t that twisted..

It's a real problem for Pentagon employees; there have even been bipartisan bills to stop feds from having so much CP.

I think some of the "keeping you safe" stuff may be spying by or for the feds, or at least make it easier for hackers, feds, and sundry busybodies.

Yup. Thankfully, the worst I’ve seen is the AI versions, but even hearing about its existence makes me sick.

Your project is doing an amazing job!

We don’t want that sick shit in our feed. Hope nostr:npub12vkcxr0luzwp8e673v29eqjhrr7p9vqq8asav85swaepclllj09sylpugg fixes this soon. It’s a common onboarding app and that kind of shit happening will only fuck with people and with their app.

Using AI and Cloudflare?? How is that OK from a privacy POV?

We only scan for CSAM-related content and only take action or retain data on that; everything else just passes through, and no data is recorded.
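In rough terms, the flow looks something like the sketch below. It’s illustrative only: a plain SHA-256 exact match stands in for the perceptual matching a service like PhotoDNA actually does, and the reporting hook is a placeholder, not our real pipeline.

```ts
// Sketch: scan an upload against a list of known-bad hashes, then either
// block-and-report or pass it through with nothing recorded.
import { createHash } from "node:crypto";

// Hashes sourced from NCMEC-style lists would live here.
const knownBadHashes = new Set<string>();

function reportMatch(digest: string): void {
  // Placeholder for the actual evidence/reporting pipeline.
  console.log(`reporting match ${digest}`);
}

function scanUpload(image: Buffer): { allowed: boolean } {
  const digest = createHash("sha256").update(image).digest("hex");
  if (knownBadHashes.has(digest)) {
    reportMatch(digest); // match: preserve evidence, block the upload
    return { allowed: false };
  }
  // No match: the digest goes out of scope and is never stored, so the
  // upload passes through without anything being recorded about it.
  return { allowed: true };
}
```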

Well, the worry I have is that a software stack that scans for CSAM can easily be repurposed to scan for anything else. Also, CSAM scanning has proven to give way too many false positives, sometimes resulting in very bad consequences for the poster.

The sad truth is that all media people post on the internet, anywhere, is scanned by many different systems… We try to be as minimally invasive as possible and provide as much privacy as possible, but there is no way around scanning for CSAM…

A larger worry is embedded AI on proprietary devices, coming fast, that effectively sees and parses EVERYTHING that is interacted with on those devices. This would be the ultimate in client-side scanning, with no way to independently verify who receives or can hack into this information, or what access governments could demand in the future.

I want a Jarvis like true digital assistant as much as (or more than) the next nerd. But only on open systems that I can rationally trust.

We also remove all location-based metadata from regular uploads, which further helps privacy.
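For JPEGs, that amounts to dropping the EXIF segment, which is where GPS coordinates live. A minimal sketch of the idea (not our actual code, and it ignores rare edge cases like padding bytes between segments):

```ts
// Sketch: strip EXIF (APP1 segments) from a JPEG, removing GPS metadata.
import { readFileSync, writeFileSync } from "node:fs";

function stripExif(jpeg: Buffer): Buffer {
  // A JPEG starts with the SOI marker 0xFFD8.
  if (jpeg[0] !== 0xff || jpeg[1] !== 0xd8) throw new Error("not a JPEG");
  const out: Buffer[] = [jpeg.subarray(0, 2)];
  let i = 2;
  while (i < jpeg.length) {
    if (jpeg[i] !== 0xff) throw new Error("corrupt segment marker");
    const marker = jpeg[i + 1];
    if (marker === 0xda) {
      // SOS: entropy-coded image data follows; copy the rest verbatim.
      out.push(jpeg.subarray(i));
      break;
    }
    const len = jpeg.readUInt16BE(i + 2); // length includes its own 2 bytes
    // APP1 (0xE1) holds EXIF/XMP, including GPS coordinates; drop it.
    if (marker !== 0xe1) out.push(jpeg.subarray(i, i + 2 + len));
    i += 2 + len;
  }
  return Buffer.concat(out);
}

writeFileSync("clean.jpg", stripExif(readFileSync("upload.jpg")));
```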

Have you considered volunteer moderators? How would that play out, if it’s possible?

We have, and at one time had volunteers helping, but it didn’t work out so well because most didn’t do anything after a while, and also we didn’t want to be liable for their mental trauma… I don’t want to put that on anyone.

The AI model we use works pretty well for now. We also use a service called PhotoDNA, which works well.