Replying to david

Long term solution will require a general framework for the decentralized curation of knowledge, which is what I’m working on. (To that end, I’ll probably be issuing some bounties soon. Know any devs who want to stack a few sats? Send them my way! 😊)

Short term: if I ran one of the popular nostr clients, here’s what I would do. It’s imperfect, but quick, easy, and probably pretty effective.

Assume that if Alice follows Bob then she probably kinda sorta trusts his judgment when it comes to muting scammers. (Not necessarily true, hence the “imperfect” qualifier. But probably good enough for a quick fix.)

Periodically comb through the mute lists of all your follows + their follows.

F1 = number of users one hop away, and F2 = number of users two hops away.

If Charlie shows up on someone’s mute list, then C1 = the number of times Charlie is muted by an F1, and C2 = number of times Charlie is muted by an F2.

In settings, have a button which, when activated, will mute anyone for whom either of the following conditions holds:

If C1 > 3 AND C1/F1 > 3%

or

If C2 > 5 AND C2/F2 > 5%

Something like that. Play with the numbers to find the right balance. Don’t want innocent people to get muted by accidental fat fingers. The reason for the percentages is that you want to make a system that works for everyone, whether you follow 10 ppl or 10,000. You could make the 3 and 5 adjustable parameters, although maybe that’s too much complexity for the user?
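The heuristic above might look something like this Python sketch. The `follows` and `mutes` inputs are assumed to be pre-fetched dicts mapping pubkey -> set of pubkeys (in a real client you’d build these from the contact lists and mute lists pulled from relays); names and thresholds are just the ones from the description above.

```python
from collections import Counter

# Sketch of the auto-mute heuristic. `follows` and `mutes` are assumed
# pre-fetched: dicts mapping pubkey -> set of pubkeys.
def auto_mute_candidates(me, follows, mutes,
                         c1_min=3, c1_pct=0.03,   # C1 > 3 and C1/F1 > 3%
                         c2_min=5, c2_pct=0.05):  # C2 > 5 and C2/F2 > 5%
    f1 = follows.get(me, set())                   # users one hop away
    f2 = set().union(*(follows.get(u, set()) for u in f1)) - f1 - {me}

    c1, c2 = Counter(), Counter()                 # mute counts per pubkey
    for u in f1:
        c1.update(mutes.get(u, set()))
    for u in f2:
        c2.update(mutes.get(u, set()))

    muted = set()
    for charlie in set(c1) | set(c2):
        if f1 and c1[charlie] > c1_min and c1[charlie] / len(f1) > c1_pct:
            muted.add(charlie)
        elif f2 and c2[charlie] > c2_min and c2[charlie] / len(f2) > c2_pct:
            muted.add(charlie)
    return muted
```

The counts and the percentage cutoffs are separate knobs on purpose: the absolute minimum protects small accounts from one or two fat-fingered mutes, while the percentage keeps the rule meaningful whether you follow 10 people or 10,000.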

Honestly I don’t know why Twitter never did this.

I don't think muting scammers makes sense.

1. Scammers will just spin up new nostr accounts.

2. You will end up with tens of thousands of muted pubkeys which relays have to store and clients have to handle. Not scalable.

3. Mute lists have low information density. Most of the muted pubkeys will be inactive because of 1.

I would do a trust-based model in which a trust score is determined by your and your friends’ follow lists and positive reactions to notes. If a user’s trust score is below a certain threshold, the client will not display their content, or will show it last. The data is already out there; clients just have to use it.

Next release of Nozzle will do it like this. It's a lightweight client which should work on shitty devices so I'm only doing a very basic version of it. I can get more fancy once I port the client to a desktop app.
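A very basic version of such a trust score could be sketched like this (made-up weights, not Nozzle’s actual implementation): a pubkey earns points for being followed by you, for each of your follows who follows them, and for each of your follows who reacted positively to their notes.

```python
# Trust-score sketch with assumed weights. `follows` maps
# pubkey -> set of followed pubkeys; `reactions` maps
# pubkey -> set of pubkeys whose notes they reacted positively to.
def trust_score(target, me, follows, reactions,
                w_direct=1.0, w_friend=0.5, w_reaction=0.1):
    friends = follows.get(me, set())
    score = w_direct if target in friends else 0.0
    score += w_friend * sum(1 for f in friends
                            if target in follows.get(f, set()))
    score += w_reaction * sum(1 for f in friends
                              if target in reactions.get(f, set()))
    return score

def should_display(target, me, follows, reactions, threshold=0.5):
    # Below the threshold the client hides the content (or ranks it last).
    return trust_score(target, me, follows, reactions) >= threshold
```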



It’s a good point about scalability.

And I agree with calculating trust scores. Trust ultimately isn’t binary: Alice may trust Bob to maintain a list, but she may trust Charlie more than Bob.

But we have to keep in mind the point nostr:npub1t0nyg64g5vwprva52wlcmt7fkdr07v5dr7s35raq9g0xgc0k4xcsedjgqv makes, and that I wrote about [1]: follow != trust.

Also, there are an infinite number of types of trust. Alice may trust Bob to maintain a bots list but not to maintain some other list. Trust is contextual.

And trust scores (as well as other types of scores) need a “confidence” component. Alice may think Charlie is 5X smarter than Bob in some given context, but her assessment may be based on scant data (low confidence) or it may be based on lots of data (high confidence). This is how Curated Lists currently works [2].

And to address the problem that bots cost (basically) zero: the default trust score for unvetted accounts should be an adjustable parameter. If sybil attacks are a problem, set the default trust score to zero. If they’re not, adjust the default score accordingly. Curated Lists currently has this as an adjustable parameter in the control panel [3].
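One simple way to combine those three knobs (a trust score, a confidence in that score, and an adjustable default for unvetted accounts) is a confidence-weighted blend. This is just an illustration of the idea, not the actual Curated Lists formula:

```python
def effective_score(observed, confidence, default=0.0):
    # Blend an observed trust score with the default for unvetted
    # accounts, weighted by confidence in [0, 1]. With confidence 0
    # (scant data) you fall back to the default; with confidence 1
    # (lots of data) you take the observation at face value.
    # The default is the adjustable sybil knob: set it to zero when
    # sybil attacks are a problem.
    return confidence * observed + (1 - confidence) * default
```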

[1] https://github.com/wds4/DCoSL/blob/main/dips/coreProtocol/02.md

[2] https://github.com/wds4/pretty-good/blob/main/appDescriptions/curatedLists/exampleListCurationGrapevine.md

[3] https://github.com/wds4/pretty-good/blob/main/appDescriptions/curatedLists/screenshots.md

Regarding your point that scammers will just spin up more accounts:

In a world with potentially unlimited swarms of scambots, perhaps we’ll need a system where we simply ignore all unvetted users. But of course we don’t want the isolated user who’s a real person to be left out in the cold. So build multiple methods to break into the system:

- pay some fee; you get to decide how much is enough to get onto your feed

OR

- social vetting: one or more trusted users attest: “this account is a real person, I know because we communicated in meat space.” This feeds into a score, and you decide what threshold score is enough to break into your feed.
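The two entry methods above could be gated like this (Python sketch; the fee threshold, the attester weight table, and the attestation threshold are all made-up parameters you’d tune yourself):

```python
def can_break_in(paid_sats, attestations, attester_weights,
                 fee_threshold=1000, attest_threshold=1.0):
    # Gate for unvetted accounts: either they paid enough sats,
    # or users you trust have attested they are a real person.
    # `attestations` is the list of pubkeys who attested;
    # `attester_weights` maps pubkey -> how much you trust them.
    if paid_sats >= fee_threshold:
        return True
    score = sum(attester_weights.get(a, 0.0) for a in attestations)
    return score >= attest_threshold
```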

These discussions always end up raising more and more questions, producing solutions of more and more complexity, and leaving people wondering: where does the complexity end?

Do we all have to use the same system?

And the kicker: Who decides????

And my mind always comes back to Loose Consensus. This is the thing we need to understand and to build. Without it, we are nothing.

https://github.com/wds4/DCoSL/blob/main/glossary/looseConsensus.md