A lot of people are starting to talk about building a web-of-trust and how nostr can be, or already is being, used as one.

We all know about using kind:3 following lists as a simple WoT that can be used to filter out spam. But a follow does not really signal "trust"; it's mostly just "I find your content interesting".
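As a rough illustration, here's a minimal sketch of that spam filter. `fetchFollows` is a hypothetical helper (not a real library call) that returns the pubkeys from the "p" tags of a user's latest kind:3 event:

```typescript
// Minimal sketch: treat everyone within two hops of my follow list as
// "in network" and filter out the rest. fetchFollows is a hypothetical
// helper that reads the "p" tags of a user's latest kind:3 event.

type Pubkey = string;

declare function fetchFollows(pubkey: Pubkey): Promise<Pubkey[]>;

async function buildWotSet(me: Pubkey): Promise<Set<Pubkey>> {
  const wot = new Set<Pubkey>();
  for (const followed of await fetchFollows(me)) {
    wot.add(followed);
    // Second hop: the people my follows follow.
    for (const pk of await fetchFollows(followed)) wot.add(pk);
  }
  return wot;
}

// Usage: hide notes from authors outside the set.
// const wot = await buildWotSet(myPubkey);
// const visible = notes.filter((n) => wot.has(n.pubkey));
```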

But what about real "trust"? Well, it's kind of multi-dimensional. I could trust that you're a good developer or a good journalist but still not trust you enough to invite you over to my house.

There are some interesting and clever solutions proposed for quantifying "trust" in a digital sense, but I'm not going to get into that here. I want to talk about something that I have not seen anyone discuss yet.

How is the web-of-trust maintained? Or, more precisely, how do you expect users to update the digital representation of their "trust" in other users?

It's all well and good to think about how a user would create that "trust" in another user when discovering them for the first time: they would click the "follow" button, or maybe even rate them on a few topics with a 1-5 star system.
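If you wanted to make that concrete, a rating like this could be published as an event. The kind number and tag names below are made up for illustration; this is not an existing NIP:

```typescript
// Hypothetical shape for a topic-scoped 1-5 star rating event.
// Kind 30199 and the "rating" tag are invented for this sketch.

interface RatingEvent {
  kind: number;
  pubkey: string;     // the rater
  created_at: number; // unix seconds
  tags: string[][];
  content: string;
}

function makeRating(rater: string, subject: string, topic: string, stars: number): RatingEvent {
  return {
    kind: 30199, // assumption: a parameterized-replaceable kind
    pubkey: rater,
    created_at: Math.floor(Date.now() / 1000),
    tags: [
      ["p", subject],               // who is being rated
      ["d", `${subject}:${topic}`], // one rating per subject+topic pair
      ["rating", String(stars)],    // 1-5 stars
    ],
    content: "",
  };
}
```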

But how will a user remove that trust? How will they update it if things change and they trust the person less?

If our goal is to model "trust" in a digital sense then we NEED a way for the data to stay up-to-date and as accurate as possible. Otherwise, what's the use?

If we don't have a frictionless way to update or remove the digital representation of "trust", then we will end up with a WoT that only ever grows, where everyone is rated 10/10.

In the case of nostr kind:3 following lists, it's pretty easy to see how these would get updated: if someone posts something I dislike, or I notice I'm getting bored of their content, I just unfollow them.
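Mechanically, an unfollow is just dropping one "p" tag from the kind:3 event and republishing it. A sketch, assuming a hypothetical `signAndPublish` helper:

```typescript
// Sketch of an unfollow: remove the target's "p" tag from the latest
// kind:3 event and republish. signAndPublish is a hypothetical helper
// that signs the event and sends it to the user's relays.

interface Kind3Event {
  kind: 3;
  tags: string[][];
  content: string;
  created_at: number;
}

declare function signAndPublish(event: Kind3Event): Promise<void>;

async function unfollow(current: Kind3Event, target: string): Promise<void> {
  await signAndPublish({
    ...current,
    tags: current.tags.filter((t) => !(t[0] === "p" && t[1] === target)),
    created_at: Math.floor(Date.now() / 1000),
  });
}
```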

An important part here is that I'm not thinking "I should update my trust score for this user" but instead "I'm no longer interested, I don't want to see this anymore".

But that is probably the easiest kind of "trust" to update, because most of us on social media already spend time curating our feeds and are used to doing it.

But what about the more obscure "trust" scores? What's the regular mechanism by which a user would update the "honesty" score of another user?

In the real world it's easy: when I stop trusting someone, I simply stop associating with them. There isn't any button or switch I need to update. I just don't talk to them anymore. It's frictionless.

But in the digital realm I would have to remove or update that trust. In other words, it's an action I need to take instead of an action I simply stop doing, and actions take energy.

So how do we reflect, in the digital world, something that takes no energy and is almost subconscious in the real world?

TL;DR: webs-of-trust are not just about scoring other users once; you must keep the scores up-to-date.

Discussion

Yeah, I agree, which is why I always kind of drag my feet in these discussions when people talk about going past follows for WoT. I think it's a good idea, but it could fail if not properly "gamified", I guess you could say.

One approach that could help is to use non-replaceable events as individual attestations, weighted less heavily over time (as opposed to replaceable events, where the easy assumption is that they're always up to date).
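A sketch of what that decay could look like, assuming each attestation carries a timestamp and a raw score; the half-life value is an arbitrary assumption:

```typescript
// Sketch: weight attestations by age with an exponential half-life,
// so stale attestations fade toward zero unless re-issued.

interface Attestation {
  score: number;      // e.g. 1-5
  created_at: number; // unix seconds
}

const HALF_LIFE_DAYS = 180; // assumption: weight halves every ~6 months

function decayedScore(a: Attestation, now = Date.now() / 1000): number {
  const ageDays = (now - a.created_at) / 86_400;
  return a.score * Math.pow(0.5, ageDays / HALF_LIFE_DAYS);
}

function aggregate(attestations: Attestation[]): number {
  return attestations.reduce((sum, a) => sum + decayedScore(a), 0);
}
```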

I think one easy version is just to have "followed by" to at least confirm the account is likely the legit one. I used this all the time on Twitter to check whether an account that followed me was legit; it never was.
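That check is cheap to compute from follow lists alone. A sketch, assuming a hypothetical `fetchFollows` helper that reads a user's kind:3 follow list:

```typescript
// Sketch of the "followed by" check: which of my follows also follow
// the candidate account? fetchFollows is a hypothetical helper that
// returns the "p" tags of a user's latest kind:3 event.

declare function fetchFollows(pubkey: string): Promise<string[]>;

async function followedByMyNetwork(me: string, candidate: string): Promise<string[]> {
  const vouchers: string[] = [];
  for (const pk of await fetchFollows(me)) {
    if ((await fetchFollows(pk)).includes(candidate)) vouchers.push(pk);
  }
  return vouchers; // empty = nobody in my network vouches for them
}
```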

People will not maintain complex (or even simple!) trust scores, but they will maintain their following list and they will mute people. All we will ever have (IMHO) is that.

Trust is indeed much more complex. I may trust you to show up at the pub, but not to hold my beer. And because of its complex nature, trust is not transitive or delegatable. But we're never going to be able to maintain representations of this.

So I think that means I agree with you.

We also have likes, replies and that kind of stuff.

Yes, my oversight. We could use those too; they don't add extra friction.

Have a list of keys in order of trust priority, let the list be user-edited, and let users manually select a key to trust (or an alternate priority list) for a given task.

For example, imagine a nostr wiki where the first person you ever followed can override all edits from anyone else, because you've never edited your priority list and your nostr client defaults to chronological order. However, while looking at an article about a certain crypto, you could manually select a dev for that coin, or another source, to compare their edits. The wiki could also track which keys are most often given special priority for each page, so you could put the wiki in a democracy-based mode.
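A sketch of how a client could resolve competing edits under that scheme; all names here are illustrative, and the fallback for unlisted authors is an assumption:

```typescript
// Sketch: pick which wiki edit to display given a user-edited priority
// list of keys. Lower index = more trusted; authors not on the list
// fall back to newest-edit-first (an assumption for this sketch).

interface WikiEdit {
  author: string;     // pubkey of the editor
  created_at: number; // unix seconds
  content: string;
}

function pickEdit(edits: WikiEdit[], priority: string[]): WikiEdit {
  const rank = (pk: string) => {
    const i = priority.indexOf(pk);
    return i === -1 ? Number.POSITIVE_INFINITY : i;
  };
  return [...edits].sort(
    (a, b) => rank(a.author) - rank(b.author) || b.created_at - a.created_at
  )[0];
}
```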