I imagine that depends on the feedback threshold, which probably varies by client? 🤷

In this case it was a pretty overwhelming response to the "thank me later" BS.


Discussion

Is this flagging thing a feature of the protocol or of Amethyst or whatever client?

If it's at the protocol level, it would probably also be something the relays could choose to filter on. That could give the user some neat choices: only subscribe to G-rated relays or snowflake relays or whatever (useful for e.g. parental controls), or the client could apply a moderation strategy as a matter of policy.
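
For what it's worth, Nostr does have reporting at the protocol level (NIP-56, kind 1984 events), so a relay or client could do something like the sketch below. The threshold, report types, and function names are just made up for illustration, not from any actual client.

```ts
// Hypothetical sketch: filtering a feed using NIP-56-style report events (kind 1984).
// Thresholds, report types, and function names are illustrative only.

interface NostrEvent {
  id: string;
  kind: number;
  pubkey: string;
  content: string;
  tags: string[][];
}

// Count "e" tags in report events and collect event IDs reported at least
// `threshold` times, optionally limited to certain report types.
function reportedIds(
  reports: NostrEvent[],
  threshold = 3,
  blockedTypes?: Set<string>,
): Set<string> {
  const counts = new Map<string, number>();
  for (const report of reports) {
    if (report.kind !== 1984) continue;
    for (const [name, eventId, reportType] of report.tags) {
      if (name !== "e" || !eventId) continue;
      if (blockedTypes && reportType && !blockedTypes.has(reportType)) continue;
      counts.set(eventId, (counts.get(eventId) ?? 0) + 1);
    }
  }
  const hidden = new Set<string>();
  for (const [id, n] of counts) {
    if (n >= threshold) hidden.add(id);
  }
  return hidden;
}

// A "G-rated" relay, or a strict client policy, could then drop flagged events outright:
function filterFeed(feed: NostrEvent[], reports: NostrEvent[]): NostrEvent[] {
  const hidden = reportedIds(reports, 3, new Set(["nudity", "illegal", "spam"]));
  return feed.filter((ev) => !hidden.has(ev.id));
}
```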

You guys bring up great points worth considering. I have no good answers.

It's a cool feature, and one worth being suspicious of. It's also an opportunity for clients to dial in the experience or make it customizable.

Same here: no answers, just unqualified opinions from an armchair quarterback lol. But I look forward to seeing how this is used. IMO content moderation is often necessary (thinking of CP, gore, extreme shit like that), but more often than not it's insidious.

The really neat thing here is transparency. Moderation can be bad when it's opaque, but if the user knows exactly what they're not seeing and why, they can change that if they want, which is super powerful.
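
In code terms, transparent moderation could just mean keeping the flagged event around and annotating it with the reason, so the UI can render "hidden: reported as spam (tap to show)" instead of silently dropping it. Rough sketch, with the same made-up NIP-56 assumptions as above:

```ts
// Hypothetical sketch of transparent moderation: keep the flagged event but
// attach the reason, so the UI can show it as hidden and let the user override.
// Reuses the NostrEvent shape and kind-1984 assumption from the earlier sketch.

interface ModeratedEvent {
  event: NostrEvent;
  hidden: boolean;
  reason?: string; // e.g. "reported as spam (2 report(s))"
}

function annotateFeed(
  feed: NostrEvent[],
  reports: NostrEvent[],
  userOverrides: Set<string>, // event IDs the user chose to reveal anyway
): ModeratedEvent[] {
  return feed.map((event) => {
    // Gather the report types attached to this event's ID via "e" tags.
    const reasons = reports
      .filter((r) => r.kind === 1984)
      .flatMap((r) => r.tags)
      .filter(([name, id]) => name === "e" && id === event.id)
      .map(([, , type]) => type ?? "unspecified");

    const hidden = reasons.length > 0 && !userOverrides.has(event.id);
    const reason =
      reasons.length > 0
        ? `reported as ${[...new Set(reasons)].join(", ")} (${reasons.length} report(s))`
        : undefined;

    return { event, hidden, reason };
  });
}
```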