Replying to brugeman

Starting to play with decentralized trust ranking in Spring v0.12.

You can adjust and publish trust scores for other users - the initial scores are estimated from your recent interactions.

https://void.cat/d/T5HriPK2C8QSd7cGsoJVL6.webp

nostr:npub1wmr34t36fy03m8hvgl96zl3znndyzyaqhwmwdtshwmtkg03fetaqhjg240 has been advocating TrustNet as a web of trust implementation, useful for spam filtering, etc.

The algorithm has two steps. First, each user publishes 'trust assignments' - those are the trust scores you can now publish with Spring. These are published as kind 10629 replaceable events with a list of 'p' tags and a score; a typical list will probably hold ~100 pubkeys. We provide an estimate based on past interactions, but it can't be precise - you may and should adjust it to match your actual relationships.
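
For illustration, here's roughly what a trust assignment event could look like. A minimal sketch: the kind number and 'p' tags are as described above, but the exact tag layout (the score carried as the last element of each 'p' tag) is an assumption, not a finalized spec.

```typescript
// Sketch of an (unsigned) kind 10629 trust assignment event.
// The tag layout -- score as the last element of each 'p' tag --
// is an assumption for illustration, not a published spec.
const trustAssignment = {
  kind: 10629,                       // replaceable event kind from the post
  created_at: Math.floor(Date.now() / 1000),
  content: "",
  tags: [
    // ["p", <pubkey hex>, <relay hint>, <score 0-100>]
    ["p", "8f2d...c1a0", "", "75"],  // hypothetical pubkey + score
    ["p", "3b91...e4d7", "", "40"],
    // ...typically on the order of ~100 entries
  ],
};
```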

The second step is that apps can download the trust assignments of users close to your network (contacts, people you like/zap a lot, etc.) and run a calculation akin to PageRank - but not a global one, it's local to your network. The result will be several thousand pubkeys with non-zero trust ranks - a much wider network of users who could be trusted.
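
To make step two concrete, here's a toy sketch of local rank propagation over the downloaded assignments. It's a personalized-PageRank-style illustration with assumed parameters (damping factor, iteration count) - not the actual TrustNet algorithm:

```typescript
// Toy local trust propagation -- an illustration, not the real TrustNet algo.
// assignments: truster pubkey -> list of [trusted pubkey, score in 0-100]
type Assignments = Map<string, Array<[string, number]>>;

function localTrustRanks(
  me: string,
  assignments: Assignments,
  iterations = 10,   // assumed; enough for ranks to settle in practice
  damping = 0.85,    // assumed damping factor, as in classic PageRank
): Map<string, number> {
  let ranks = new Map<string, number>([[me, 1]]);
  for (let i = 0; i < iterations; i++) {
    // All teleport mass returns to `me`, which keeps the ranking local.
    const next = new Map<string, number>([[me, 1 - damping]]);
    for (const [from, rank] of ranks) {
      const out = assignments.get(from) ?? [];
      const total = out.reduce((sum, [, score]) => sum + score, 0);
      if (total === 0) continue;
      for (const [to, score] of out) {
        // Pass rank on proportionally to the published trust score.
        next.set(to, (next.get(to) ?? 0) + damping * rank * (score / total));
      }
    }
    ranks = next;
  }
  return ranks; // non-zero ranks only for pubkeys reachable from `me`
}
```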

This way the trust ranking is a) based on everyone's actual relationships, because you can adjust the trust scores you're publishing, and b) efficient and usable by any app - it just needs to download several hundred trust score lists, run the trustnet algo periodically, and store the results in a local cache.
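
Building on the sketch above, the periodic recompute-and-cache loop could be as simple as this (fetchTrustAssignments is a hypothetical relay query for kind 10629 events, and the TTL is arbitrary):

```typescript
// Sketch of periodic recompute with a local cache, reusing the
// Assignments type and localTrustRanks sketch above.
declare function fetchTrustAssignments(me: string): Promise<Assignments>; // hypothetical

const CACHE_TTL_MS = 6 * 60 * 60 * 1000; // recompute every ~6h (arbitrary)
let cache: { at: number; ranks: Map<string, number> } | null = null;

async function trustRanks(me: string): Promise<Map<string, number>> {
  if (cache && Date.now() - cache.at < CACHE_TTL_MS) return cache.ranks;
  const assignments = await fetchTrustAssignments(me); // a few hundred lists
  cache = { at: Date.now(), ranks: localTrustRanks(me, assignments) };
  return cache.ranks;
}
```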

Spring only does step one at the moment. When enough people publish their trust assignments, we will add the second step and let you calculate your own trust ranks. Spring will show trust ranks under profiles and will use them for spam filtering later. Other apps will probably find other uses for it.

More on TrustNet here: https://cblgh.org/trustnet/

FORGIVE MY RANT … but I see TrustNet as little more than a glorified popularity contest. A less qualitative, more measurable (and meaningful) ranking system may actually be more valuable.

Here are some random thoughts:

On qualifying relationships:

- people don’t want to rank the “quality” of their relationships. IT’S HARD!!

- people especially don’t want to come back AGAIN to update these scores (or even add scores for new friends). It will NEVER happen on a regular basis!! (Srsly?)

- Qualitative scores reflecting the “depth” of one’s relationship (acquaintance, friend, peer, partner) by definition need to be updated as one’s “perception” of the relationship changes.

- If keeping these qualitative scores updated is required for the success of this trust ranking system, IT WILL FAIL to hold value for its intended purpose.

On trust as a numeric scale:

- The discrete 100-value scale underpinning this “qualitative” score is not only ridiculous (nobody knows or cares about a 100-value scale) and meaningless (the increments as applied will still be arbitrary), it could also be detrimental to a trust ranking system.

- Trustworthiness is not a scalar value. Humans don’t have “more” or “less” trust for each other. We either “do” or “do not” trust each other in specific cases. Because of this, ranking on a scale is prone to misinterpretation.

- What translates well to a scale is popularity. “If my friends trust X (or if X has a bigger voice and reach) then I will give X a higher trust score.” The problem with this is that the value no longer represents individual trustworthiness.

On quantifiable measures of trust:

- Quantitative measures can be used to determine trust. They don’t ALL have to be algorithmically derived. A mix of hand-reported and computer-generated data may work best.

- Digital identities may have “layers” of trust (distinct from physical interactions) that may be applied “each on their own” (in no particular order) to determine trustworthiness for specific interactions.

- One layer of digital trust may be verification of personhood. For some transactions, a real person is required.

- Another layer of digital trust may be verifying asset ownership. Is this the same entity that “owns” X, Y, or Z known digital assets?

- Another layer of digital trust may be verifying originality. Does this account pretend to be somebody else, and if so, is it obviously a spoof? (This may be best accomplished by actual humans.)

- Other layers of trust may exist, may be discovered, and may be applicable for web of trust implementations. For this reason, any NIP developed should be open to expansion.

- Web of trust COULD be determined by discrete “flags” being applied (by humans and by algorithms) to a profile. Each verifies a specific, known, and measurable quantity. Together they “add up to” an overall “verified” or “trusted” visible mark (one or three different marks?) applied to profiles. TBD - see the rough sketch below.
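
To illustrate the flags idea (the flag names and the aggregation rule below are placeholders, not a proposal):

```typescript
// Sketch of discrete trust flags on a profile. Flag names and the
// aggregation rule are placeholders for illustration only.
interface TrustFlags {
  personhoodVerified: boolean;      // a real person is behind this key
  assetOwnershipVerified: boolean;  // same entity that owns known assets
  originalityVerified: boolean;     // not an obvious impersonation
  // ...open to expansion, as argued above
}

type Mark = "unverified" | "verified" | "trusted";

function overallMark(flags: TrustFlags): Mark {
  const set = Object.values(flags).filter(Boolean).length;
  if (set === 0) return "unverified";
  return set === Object.keys(flags).length ? "trusted" : "verified";
}
```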

We really should be discussing this in earnest (openly, but in a dedicated format). Decentralized WOT implementation will NOT ONLY be a prime differentiator between Nostr and other socials, but will ALSO be essential for Nostr’s success as a social network that is NOT overrun by bots and bad actors.

Thanks. #rantover


Discussion

Uh nostr:npub1manlnflyzyjhgh970t8mmngrdytcp3jrmaa66u846ggg7t20cgqqvyn9tn, TrustNet is a decentralized subjective WoT system. The numbers only make sense from the perspective of one user towards the network of their contacts.

Thanks. With respect, I do understand. As it should be, trust is relative. Rankings in “my” web of trust will be different than rankings in yours. We can talk about how this should be implemented (I’d be honored to be included) but this doesn’t change my base arguments:

1: “quality of relationship” is HARD and (at best) will not be updated by people. Certainly not en masse.

2: “trust as a numeric scale” will likely NOT reflect an individual’s “trustworthiness”, and may in fact be misleading if presented as such.

3: quantifiable (even if some are relative to each user) and non-linear (discrete variables that stand on their own) measures of trust can be used to achieve our goal. They might be numerous (and some as yet undefined), but they can be “easily understood” and, because of this, can be “trusted” by everybody to mean what they promise to mean.

Forgive my random thoughts. Would love to converse more formally on this topic. How Nostr implements WOT may in fact be its downfall or its saving grace. Thank you.

nostr:npub1wmr34t36fy03m8hvgl96zl3znndyzyaqhwmwdtshwmtkg03fetaqhjg240 nostr:npub1xdtducdnjerex88gkg2qk2atsdlqsyxqaag4h05jmcpyspqt30wscmntxy

Here is a simple idea that could incorporate TrustNet and other WOT filter implementations.

As per my rant above, I believe a “parent NIP” that defines a consistent UX and API (of sorts) for WOT filters (in clients) would be the best choice moving fwd. Here’s a “top view” of how that could play out:

nostr:note1za8gapacw9l2r6eqxljpx478r8vgfsd3uxe3qkh7k424sc9nekssyevl7p

Great rant, thank you. I agree with most of the problems you're outlining.

First, on layers - I never suggested that we need only a single set of trust assignments. If we need several layers or facets, we can have more.

Second, on updating - manual or automated, any trust signal that is published will have to be updated periodically. UX might make it super smooth, but still.

Third, on discrete flags - instead of a 0-100 scale you're suggesting a 0-1 scale; what does that change in principle? If humans are limited, let them publish only 0 and 100, and leave the full range to machines.

Is there a preview of the NIP you're working on?