I shared this on the fediverse, Bluesky, and Twitter to get a sense of what folks from other communities think. In particular, I got feedback from one person who really knows the problems of content moderation and trust and safety:
Yoel Roth, who ran Twitter's trust and safety team until Elon Musk took over, said:
> “The taxonomy here is great. And broadly, this approach seems like the most viable option for the future of moderation (to me anyway). Feels like the missing bit is commercial: moderation has to get funded somewhere, and a B2C, paid service seems most likely to be user-focused. Make moderation the product, and get people used to paying for it.” - https://macaw.social/@yoyoel/110272952171641211
I think he's right: we need to get the commercial model for funding moderation work right. It could be crowdsourced, a company, a subscription service, etc. Lightning helps a ton here, because it enables easy, fast, cheap payments! Users can then choose which moderation service they want to use, or choose not to use any at all.
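To make the "user picks a moderation service" idea concrete, here's a minimal client-side sketch in TypeScript. It assumes the moderation service publishes NIP-32 label events (kind 1985) that reference offending posts via "e" tags, and that the client has already fetched both the posts and the labels from its relays; the payment step (e.g., a Lightning subscription to the service) is out of scope here. The type and function names are illustrative, not taken from any real client.

```typescript
// Minimal shape of a nostr event (see NIP-01).
interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  tags: string[][];
  content: string;
  created_at: number;
}

// The pubkey of the moderation service the user chose (and, in this
// model, pays via Lightning). Hypothetical placeholder value.
const chosenModServicePubkey = "npub1...moderation-service";

// NIP-32 label events have kind 1985 and reference the labeled
// post with an "e" tag.
const LABEL_KIND = 1985;

// Collect the ids of posts the chosen service has labeled.
function labeledPostIds(labels: NostrEvent[], servicePubkey: string): Set<string> {
  const ids = new Set<string>();
  for (const label of labels) {
    if (label.kind !== LABEL_KIND || label.pubkey !== servicePubkey) continue;
    for (const tag of label.tags) {
      if (tag[0] === "e" && tag[1]) ids.add(tag[1]);
    }
  }
  return ids;
}

// Hide posts flagged by the user's chosen service. Users who opt out
// of moderation simply skip this filter and see everything.
function applyModeration(posts: NostrEvent[], labels: NostrEvent[]): NostrEvent[] {
  const hidden = labeledPostIds(labels, chosenModServicePubkey);
  return posts.filter((post) => !hidden.has(post.id));
}
```

The key design point is that the filtering is purely client-side: relays still carry everything, and switching moderation services is just swapping a pubkey (and a subscription).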
A group could even come together and set up a moderation service that complied with local laws. The most extreme example would be a company providing a moderation service in China that complied with Chinese social media law: if you installed a mobile app in China, it would use that moderation service, locked in.
Another could be a church group providing moderation services that reflect the cultural values of its community. That one doesn't have the state behind it, so it wouldn't be locked to a region, but it would still be very valuable to its users.
Perhaps there could be a nostr kids moderation service for users who are under 18?
Anyway, we need to find ways to fund and pay for these services, especially since we can't just take a cut of advertising revenue to cover the costs.
And since nostr is open source and a permissionless network, whatever moderation one app or user chooses isn't imposed on others.