It's great seeing devs get serious about trying to block illegal content of the worst kinds.

That said, I can't help but fear that the tools built to combat this type of content that virtually everyone agrees should be eradicated will one day be used to censor other content that should be protected as free speech. How can we build tools to censor heinous and despicable content like CSAM, while not simultaneously creating tools that could destroy Nostr's free-speech assurances for other forms of content?

Discussion

First, I am not a coder in ANY respect, but I don't see how that would be possible. I agree that despicable content should not be allowed, but if you create tools to combat it, they can be used for censorship as well.

Any tool, once created, can be used for good or evil. But, like I said, not a coder. 🤷

Perhaps lists are the answer. Make it easy to follow someone else's list of follows, at least to onboard. If someone has a good starting point, there's no real need to go into the global feed.

Just stay out of the cesspool. Be able to select a list of hashtag follows or interesting subjects.

Not sure if this fits into the conversation you're starting, but it may work well for onboarding and ignoring unwanted content.
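
For the protocol-minded, here's a minimal sketch of what list-based onboarding could look like. It assumes a hypothetical `fetchEvents` helper standing in for whatever relay-query function a client library provides; the event kinds are standard (kind 3 is a NIP-02 contact list, kind 1 a text note):

```typescript
// `fetchEvents` is a hypothetical helper that sends a standard Nostr REQ
// to a relay and resolves with the matching events; any client library
// provides an equivalent.
interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  tags: string[][];
  content: string;
  created_at: number;
}

declare function fetchEvents(
  relay: string,
  filter: Record<string, unknown>
): Promise<NostrEvent[]>;

// Bootstrap a new user's feed from someone else's follow list.
async function feedFromFollowList(relay: string, trustedPubkey: string) {
  // Kind 3 (NIP-02) is the contact list; its "p" tags are followed pubkeys.
  const [contactList] = await fetchEvents(relay, {
    kinds: [3],
    authors: [trustedPubkey],
    limit: 1,
  });
  if (!contactList) return [];

  const follows = contactList.tags
    .filter((tag) => tag[0] === "p")
    .map((tag) => tag[1]);

  // Pull recent text notes (kind 1) only from those follows -- no global feed.
  return fetchEvents(relay, { kinds: [1], authors: follows, limit: 100 });
}
```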

So, there are already a ton of ways to keep most garbage you don't want to see out of your feed. But spam and CSAM are now being posted under popular hashtags and in replies to folks posting under the Introductions tag, which is a much harder problem to solve for.

Moreover, relay runners and media hosts like nostr.build have a legal responsibility to report and delete such content. However, any tool built for finding and deleting CSAM can also be adapted for, say, finding and deleting any content promoting Bitcoin, or speaking negatively about the CCP, etc.

Most folks will say, "Not a problem. Just run your own relay that doesn't censor that content, or find a public relay that won't censor you." That's all well and good, and the devs recognize that blocking anything at the relay level is an exercise in futility, because there will always be a relay willing to not censor it.

As a result, though, they are looking for ways to block the content at the client level: ways to have an image checked for CSAM by an AI before the client ever displays it to you. Sounds absolutely wonderful! Something I would absolutely want for blocking such content.
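
As a rough sketch of the shape such a gate might take, here's the idea in miniature, with a hypothetical `classifyImage` standing in for whatever scanning model or hash-matching service a client integrates:

```typescript
// `classifyImage` is a hypothetical stand-in for whatever scanning model
// or hash-matching service a client chooses to integrate.
declare function classifyImage(bytes: ArrayBuffer): Promise<{ blocked: boolean }>;

// Fetch media, run it through the classifier, and only hand it to the
// renderer if it passes.
async function loadImageIfClean(url: string): Promise<Blob | null> {
  const res = await fetch(url);
  const bytes = await res.arrayBuffer();

  const verdict = await classifyImage(bytes);
  if (verdict.blocked) {
    return null; // never rendered; the client shows a placeholder instead
  }
  return new Blob([bytes]);
}
```

Note that nothing in this function knows or cares what the classifier was trained on, which is exactly the double-edged-sword problem being described here.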

That said, the same tool could then be used to identify content speaking ill of the CCP and block it at the client level, so you don't see it regardless of what relays you have running.

The only saving grace we have here is that it's very unlikely every client would use these tools to block content speaking ill of the CCP, even if all of them would, and should, implement them for blocking CSAM.

Nevertheless, clients could become a major point of failure for censorship resistance with such tools.

Don't use global, rely on WoT, require payment for access and storage, publicly flag and report content, and realize that bad things will happen under true freedom. Freedom is for enemies; no practical way to avoid that, imo.
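
Public flagging, for what it's worth, already has a protocol-level hook: NIP-56 defines kind 1984 report events. A minimal sketch of building one (left unsigned here; a real client's signer would add the id and sig):

```typescript
// Sketch: a NIP-56 report event (kind 1984) flagging a note as illegal.
function buildReport(offendingEventId: string, offenderPubkey: string) {
  return {
    kind: 1984, // NIP-56 reporting event
    created_at: Math.floor(Date.now() / 1000),
    tags: [
      ["e", offendingEventId, "illegal"], // report type rides in the tag
      ["p", offenderPubkey],
    ],
    content: "", // optional free-text details
  };
}
```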

That's been generally my stance as well. Bitcoin is going to be used, and has been used, for all kinds of illegal activity. It would be the wrong course of action to try to find a way to keep transactions for certain illegal purposes from being added to a block, or to keep them from being able to route through Lightning.

For Bitcoin's permissionless and censorship resistant qualities to be preserved, even the worst uses for money have to be impossible to block.

Go after the people committing the crimes, rather than trying to block them from using money to commit them.

It seems to me that for Nostr to remain truly permissionless and censorship resistant, the same must be able to be said here, too.

The difference is that other people don't have to see what bad actors are using their Bitcoin for, but random users absolutely can stumble upon the most disturbing illegal content on Nostr.

So we definitely need tools that allow users to only see the content they actually want to see, such as the ones you mentioned.

The main tradeoff to consider (and/or break out of) comes down to accepting some data silos versus getting content discovery and facing censorship. The solutions I mentioned are simple and probably get most of the job done, but they also generally put new users at a disadvantage in terms of being discovered (even with outbox support, since relays would be more "locked down") and make it harder for existing users to discover new content. On the other side, we'd have great tools for finding new stuff and discovering new communities, but we might have to face off against a lot of pressure to shut down relays, or feed everything into a data-collection machine in order to monitor content, and that's definitely not ideal.

The good thing is that, as users, we have the ability to run our own relays and decide our own content fetching/storage/moderation policies, and that's ultimately why I think Nostr-based applications and infrastructure will win out against it all. The tools to do so just need to get more accessible (which they are) and consumer behavior needs to change (which it slowly is).

I feel like, as early adopters, we accept that there are challenges here to work out. I'm concerned that as we onboard friends and family, and they see despicable content in the Introductions or Grownostr hashtags, they will abandon the protocol.

Could clients use AI to filter images but be very transparent about it? Like movie ratings: the user selects whether they want a G-rated experience, PG, PG-13, R, or NC-17. Maybe even require a passcode to change that (think accounts used by minors). Anyway, it seems like there could be ways to approach it; different clients could tailor to their customers and let the market decide.

For what it's worth, when Primal 2.0 came out and the firehose was an errant swipe away, I suggested making the firehose a little harder to get to, and they responded. Imagine X or Meta responding to customer feedback that quickly (and I know it was likely others' feedback too). Appreciate the thoughtful discussion all!
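
A rating preference like that is cheap to represent client-side. Here's a minimal sketch, assuming the hard part (assigning a rating to each piece of content, via a classifier or labels) happens elsewhere; `sha256Hex` is a hypothetical hashing helper:

```typescript
// Hypothetical hashing helper; a real client would use WebCrypto or similar.
declare function sha256Hex(s: string): Promise<string>;

type Rating = "G" | "PG" | "PG-13" | "R" | "NC-17";
const ORDER: Rating[] = ["G", "PG", "PG-13", "R", "NC-17"];

interface Prefs {
  maxRating: Rating;    // most permissive rating the user will see
  passcodeHash: string; // required to raise maxRating (accounts used by minors)
}

function isViewable(contentRating: Rating, prefs: Prefs): boolean {
  return ORDER.indexOf(contentRating) <= ORDER.indexOf(prefs.maxRating);
}

async function raiseMaxRating(prefs: Prefs, passcode: string, next: Rating) {
  if ((await sha256Hex(passcode)) !== prefs.passcodeHash) {
    throw new Error("wrong passcode");
  }
  prefs.maxRating = next;
}
```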

Really easily. Just focus on preventing random media from automatically being downloaded and displayed in clients. Text-based notes should always be protected.
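
Concretely, that could look like this: text always renders, and media becomes a click-to-reveal placeholder unless the author is someone you already follow. The UI helpers here are hypothetical stand-ins for a client's view layer:

```typescript
// Hypothetical UI helpers; a real client wires these to its view layer.
declare function showText(text: string): void;
declare function showImage(url: string): void;
declare function showPlaceholder(url: string, onReveal: () => void): void;

interface Note {
  authorPubkey: string;
  text: string;
  mediaUrls: string[];
}

function render(note: Note, follows: Set<string>) {
  showText(note.text); // text-based notes are always displayed

  for (const url of note.mediaUrls) {
    if (follows.has(note.authorPubkey)) {
      showImage(url); // author you follow: load as usual
    } else {
      // Unknown author: never auto-download; the user opts in per image.
      showPlaceholder(url, () => showImage(url));
    }
  }
}
```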

Luckily, most of these tools are highly targeted. Using them doesn't break Nostr's user-centric nature either, so I don't think they will actually result in meaningful centralization.