Moderation as a social exercise on Nostr only works because clients implement whatever "social moderation specification" exists in their code.

Clients can always fully ignore humans moderating each other, or they can use that social information in their favor. Either way, the client's code decides whether moderation is active and how much of it runs. The social layer can only provide information to the code.
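This division of responsibility can be sketched as a toy filter: the mute list (the social signal, loosely in the spirit of NIP-51 mute lists) is just data, and the client's code alone decides whether that data has any effect. All names here are illustrative, not taken from any real client or specification.

```python
# Hypothetical sketch, not any real client's code: a Nostr client deciding
# whether to honor a user's mute list when choosing which notes to display.

def filter_notes(notes, muted_pubkeys, apply_moderation=True):
    """Return the notes this client will display.

    The social layer only supplies `muted_pubkeys`; whether that
    information has any effect is decided entirely by this code path.
    """
    if not apply_moderation:
        # A client is free to ignore the social signal completely.
        return list(notes)
    return [n for n in notes if n["pubkey"] not in muted_pubkeys]


notes = [
    {"pubkey": "alice", "content": "hello"},
    {"pubkey": "bob", "content": "spam"},
]
muted = {"bob"}

print(filter_notes(notes, muted))                          # bob's note hidden
print(filter_notes(notes, muted, apply_moderation=False))  # same data, muting ignored
```

Two clients can receive identical mute lists and behave completely differently, which is exactly why reading the client's code is the only way to know what moderation you actually get.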

You can verify how much each client complies with your moderation expectations by reviewing its code. That was my point regarding "proof in code".


Discussion

I see what you’re saying 🤝🏻