Let’s imagine we could "rate" our interactions with one another here on Nostr. You can rate an interaction as positive or negative, but you can’t see how the other party rated you until you submit your own rating; this keeps the ratings blind so the first rating can’t influence whoever rates last. The only thing you’ll know is that they rated you. Once both parties have rated each other, the ratings become public.
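One way to get this blind, mutual-reveal property is a simple commit-reveal scheme: each party first publishes only a hash commitment to their rating, and the plaintext ratings are revealed (and become public) only once both commitments exist. Here is a minimal sketch in TypeScript, with hypothetical helper names and no particular Nostr event kind assumed:

```typescript
import { createHash, randomBytes } from "crypto";

type Rating = "positive" | "negative";

// Commit phase: publish only the hash, keep the salt private.
// The counterparty learns that you rated them, but not how.
function commitRating(rating: Rating): { commitment: string; salt: string } {
  const salt = randomBytes(32).toString("hex");
  const commitment = createHash("sha256")
    .update(`${rating}:${salt}`)
    .digest("hex");
  return { commitment, salt };
}

// Reveal phase: once both commitments are published, each party reveals
// (rating, salt) and anyone can verify the reveal matches the commitment.
function verifyReveal(commitment: string, rating: Rating, salt: string): boolean {
  const recomputed = createHash("sha256")
    .update(`${rating}:${salt}`)
    .digest("hex");
  return recomputed === commitment;
}

// Example: Alice commits, Bob commits, then both reveal.
const alice = commitRating("positive");
console.log(verifyReveal(alice.commitment, "positive", alice.salt)); // true
console.log(verifyReveal(alice.commitment, "negative", alice.salt)); // false
```

The random salt matters because there are only two possible ratings; without it, anyone could brute-force the commitment and learn the rating before the reveal.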


Discussion

Yes, there are various ways to do ratings and cluster networks with webs of trust, and hopefully clever solutions get implemented. However, I wonder whether "network only" solutions run into Sybil attack problems: how do you stop fake reviews from a botnet and spoofed interactions?

My concept is that the most robust Sybil resistance would come from costly proof that an interaction actually occurred. For an economic transaction, that proof could take the form of a "reputation rake" paid to a third-party relay.
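As a rough illustration of what such costly proof might look like (all field names hypothetical, not part of any existing NIP), the relay could countersign a small receipt only after the rake is paid, and clients would only count ratings that reference such a receipt:

```typescript
// Hypothetical receipt a relay issues after the "reputation rake" is paid.
// Only ratings that reference a valid receipt would be counted by clients,
// making each fake interaction cost real money.
interface InteractionReceipt {
  buyerPubkey: string;   // hex pubkey of the paying party
  sellerPubkey: string;  // hex pubkey of the counterparty
  amountSats: number;    // value of the underlying transaction
  rakeSats: number;      // fee actually paid to the relay
  paymentHash: string;   // e.g. the Lightning payment hash for the rake
  relayPubkey: string;   // relay that collected the rake
  relaySig: string;      // relay's signature over the fields above
  timestamp: number;     // unix seconds
}

// Client-side policy check: a rating only carries weight if its receipt is
// signed by a relay the client trusts and the rake meets a minimum amount.
function receiptIsAcceptable(
  receipt: InteractionReceipt,
  trustedRelays: Set<string>,
  minRakeSats: number,
  verifySig: (r: InteractionReceipt) => boolean, // signature check over a canonical serialization
): boolean {
  return (
    trustedRelays.has(receipt.relayPubkey) &&
    receipt.rakeSats >= minRakeSats &&
    verifySig(receipt)
  );
}
```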

Reputation could then be weighted, and its expected future value to the economic actor, minus the residual value of a tarnished identity if they cheat, can be computed. That difference is the cost of cheating, so a new counterparty can trust this actor up to some fraction of it (a safety margin). Clients would automate all of these computations.
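A back-of-the-envelope version of that computation, with all numbers and the discounting approach being my own assumptions: the cost of cheating is the present value of the clean reputation's future earnings minus whatever a tarnished identity is still worth, and the counterparty trusts the actor up to a safety-margin fraction of that cost.

```typescript
// Trust threshold = (present value of clean reputation
//                    - residual value of a tarnished identity) * safety margin.
// A rational actor will not cheat for less than the reputation they would burn.
function trustThresholdSats(params: {
  expectedAnnualProfitSats: number; // profit attributable to the reputation
  yearsOfFutureBusiness: number;    // horizon over which it keeps paying
  discountRate: number;             // e.g. 0.1 for 10% per year
  tarnishedResidualSats: number;    // what the identity is worth after cheating
  safetyMargin: number;             // e.g. 0.5 = trust only half the cost of cheating
}): number {
  const { expectedAnnualProfitSats, yearsOfFutureBusiness, discountRate } = params;

  // Present value of the future reputation stream (simple annuity formula).
  const presentValue =
    expectedAnnualProfitSats *
    (1 - Math.pow(1 + discountRate, -yearsOfFutureBusiness)) /
    discountRate;

  const costOfCheating = Math.max(0, presentValue - params.tarnishedResidualSats);
  return costOfCheating * params.safetyMargin;
}

// Example: 1M sats/year of reputation-driven profit for 5 years at a 10% discount
// rate, a worthless identity after cheating, and a 50% safety margin.
console.log(trustThresholdSats({
  expectedAnnualProfitSats: 1_000_000,
  yearsOfFutureBusiness: 5,
  discountRate: 0.1,
  tarnishedResidualSats: 0,
  safetyMargin: 0.5,
})); // ≈ 1.9M sats
```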