Some of the people who've seen what #[0] & I have been proposing in "NIP-69" seem to think the objective is censorship. So today I sat down and wrote out the bigger "vision" of where I'd like to see content moderation go on Nostr. Feel free to give it a read:

https://s3x.social/nostr-content-moderation

Just realize it's a first draft and needs work. But the point I hope to get across is that I want to see something that's individual and "bottom up". To me censorship is always top down, since at the core of censorship is some authority flexing their power and enforcing their idea of what's good and bad - overriding your idea of good and bad.

Instead I want to see a cacophony of voices with individuals choosing which voices in that chaos they want to listen to for filtering their feeds. (Or they could choose to listen to none and see it all.)
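To make that a bit more concrete, here's a rough sketch in TypeScript of what "choosing which voices to listen to" could look like in a client: the user picks some moderator pubkeys, and the client hides notes those moderators have reported via kind 1984 report events. The types and the fetchReports() helper are made up for illustration; this is a sketch of the idea, not a real implementation.

```typescript
// A minimal sketch of "bottom-up" filtering: hide only what the
// moderators *you* chose have reported. All helper names are hypothetical.

interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  tags: string[][];
  content: string;
}

// Hypothetical stand-in for a relay query; a real client would send a REQ
// with a filter like { kinds: [1984], authors: moderators } to its relay pool.
async function fetchReports(moderators: string[]): Promise<NostrEvent[]> {
  return [];
}

// Pubkeys of the moderators this user has chosen to listen to (may be empty).
const chosenModerators: string[] = [];

async function filterFeed(feed: NostrEvent[]): Promise<NostrEvent[]> {
  // "Listen to none and see it all" is simply the empty list.
  if (chosenModerators.length === 0) return feed;

  const reports = await fetchReports(chosenModerators);
  // Reported note ids appear in the "e" tags of kind 1984 report events.
  const reportedIds = new Set(
    reports.flatMap((r) => r.tags.filter((t) => t[0] === "e").map((t) => t[1]))
  );
  return feed.filter((note) => !reportedIds.has(note.id));
}
```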

But systems have to be put in place to make that a reality. It won't happen by accident.

And yes, the government will always force a certain level of censorship on us. But there are ways around that. For example, our relay can't have anything related to escorting on it thanks to FOSTA/SESTA (horrible law), but people who need to post about escorting could use #[1]'s relay. And that's the whole point with Nostr - it's censorship-resistant, not censorship-proof. Nothing is censorship-proof…

Discussion

I shared this on the fediverse, Bluesky, and Twitter to get a sense of what folks from other communities think. In particular, I got feedback from one person who really knows the problems of content moderation, trust, and safety:

Yoel Roth, who ran Twitter's trust and safety team up until Elon Musk took over, said:

> “The taxonomy here is great. And broadly, this approach seems like the most viable option for the future of moderation (to me anyway). Feels like the missing bit is commercial: moderation has to get funded somewhere, and a B2C, paid service seems most likely to be user-focused. Make moderation the product, and get people used to paying for it.” - https://macaw.social/@yoyoel/110272952171641211

I think he's right: we need to get the commercial model right for paying for the moderation work. It could be a crowdsourced thing, a company, a subscription service, etc… Lightning helps a ton here, because we can do easy, fast, cheap payments! Users can then choose which moderation service they want to use, or choose not to use any at all.

A group could even come together and set up a moderation service that was compliant with local laws. In the most extreme example, a company could provide a moderation service in China that complied with Chinese social media laws; a mobile app installed in China could be locked into using that service.

Another could be a church group providing a moderation service that reflects the cultural values of the group. That one doesn't involve the state, so it wouldn't be locked to a region, but it would still be very valuable to its users.

Perhaps there could be a nostr kids moderation service for users who are under 18?

Anyway, we need to find ways to fund and pay for these services, especially since we can't just take a cut of advertising revenue to cover the costs.

And as it’s open source and a permissionless network, what one app or user does isn’t imposed on others.

I see a huge role for NGOs in moderating Nostr. I can see a lot of organizations wanting a role if Nostr really becomes a leading social media platform: SPLC, ASACP, FSC, and the ACLU to name a few on the liberal side, but also their conservative counterparts. They could do fundraising to give them the money to have a voice.

Where I see corporate involvement is in “AI-ish” bots that do a “decent” job detecting certain types of content. Those won’t be perfect but they’ll cover a lot of ground quickly.

In related news… One of our challenges, which I discovered today, is that nostream (and presumably other relays) won't accept Kind 1984 events from non-paid users even though non-paid users can read from the relay. That needs to be fixed ASAP - it's a huge legal problem for the relay operators. I would go as far as saying it should be added to NIP-56: "paid relay operators must accept Kind 1984 events from any user if it relates to content in their relay".
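For reference, a NIP-56 report is just a small event, so this is roughly the shape of what a relay would need to accept from a read-only user. The values below are placeholders, not from the post:

```typescript
// Rough shape of a NIP-56 report event (kind 1984); field values are examples.
const reportEvent = {
  kind: 1984,
  pubkey: "<reporter-pubkey>",
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    // Reported note plus a report type (NIP-56 defines types such as
    // "nudity", "illegal", "spam", "impersonation").
    ["e", "<reported-event-id>", "illegal"],
    // Author of the reported note.
    ["p", "<reported-author-pubkey>"],
  ],
  content: "Free-form explanation of the report",
  // id and sig would be computed and signed by the client before publishing.
};
```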

#[4]

Lmao

Lol

I think this is very well thought through. I would put more emphasis on trust lists than nomenclature, since the former protects users and the latter protects service providers, but I think you're right that both are needed.

I will def spend some time understanding your proposal and where you are coming from. I am extremely skeptical otherwise, and I admit that's not good on my part.

Is NIP-69 a precursor to another proposal you have in mind as well?

Nice write-up. I really appreciate your leadership in this area. I think a lot of Bitcoiners are used to skirting KYC laws and think that Nostr can do the same when it comes to moderating content. Ignore the haters; I think many of them are just naive. Stories like the one #[1] gave in his Nostrica talk convert people quickly.

When it comes to responding to moderator reports I would love to see a really decentralized system like TrustNet be applied at the client level in addition to the necessary work at the relay level. After seeing so many Mastodon moderators burn out, and seeing how many people Big Social has working on moderation, I think we need a paradigm shift if we really want to scale a decentralized social system. The Secure Scuttlebutt idea of using your peers as an immune system against bad content is revolutionary. We've brought Scuttlebutt's two-hops algorithm into Nos, but something like TrustNet takes it to the next level. https://cblgh.org/trustnet/
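For anyone unfamiliar with the two-hops idea, here's a toy sketch of how a client could scope its view to friends and friends-of-friends. The followsOf() lookup is made up (a real client would read kind 3 contact lists), and this is only meant to illustrate the concept, not how Nos or TrustNet actually implement it:

```typescript
// Illustrative two-hop "peers as an immune system" sketch.
// followsOf() is a hypothetical lookup from pubkey -> pubkeys they follow.
async function followsOf(pubkey: string): Promise<string[]> {
  return []; // placeholder: fetch the contact list for `pubkey` from relays
}

async function twoHopNetwork(me: string): Promise<Set<string>> {
  const oneHop = await followsOf(me);
  const network = new Set<string>(oneHop);
  for (const friend of oneHop) {
    for (const friendOfFriend of await followsOf(friend)) {
      network.add(friendOfFriend);
    }
  }
  // Content or reports from pubkeys outside this set could be hidden or down-ranked.
  return network;
}
```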

That’s a great article! The same basic idea I had, but fully fleshed out. The only thing I wish were in there is the idea of negative trust (e.g. “Do the opposite of whatever Tucker Carlson says to do”).
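Something like this toy weighting scheme is what I mean by negative trust: each reporter you've chosen gets a weight, and a negative weight means their reports count in the opposite direction. The pubkeys and threshold below are made up:

```typescript
// Sketch of "negative trust": reports from negatively weighted accounts
// count as evidence the content is fine rather than evidence it should go.
const trustWeights: Record<string, number> = {
  "<trusted-moderator-pubkey>": 1.0,
  "<distrusted-reporter-pubkey>": -1.0, // "do the opposite" of this account
};

function shouldHide(reporters: string[]): boolean {
  const score = reporters.reduce((sum, pk) => sum + (trustWeights[pk] ?? 0), 0);
  return score > 0; // hide only when net trusted reports outweigh negative ones
}
```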