Now that is nice! I’d love to see those numbers.

But there’s more to consider in a real-world “trusted” WoT implementation (at scale, on a network whose new-user retention will depend entirely on those users feeling “included” in a “trusted” network from day one) than a quantified difference between algorithm-derived WoT lists and “human in charge” WoT lists.

Because WoT will likely end up first in line during new-user orientation and be talked about constantly (custom content filters are already a claim to fame for Bluesky, and Nostr’s will be even better), the tools for implementing WoT will also need to be front and center: easy to understand, discover, and use.

IMHO, giving people an “is trusted” checkbox for their follows and saying “this controls your web of trust” will be the ideal on-ramp for getting them to understand that (client-configurable) content filters are ALSO working behind the scenes, suggesting their “trustworthy” sources.
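
To make that concrete, here’s a rough sketch of what the checkbox could boil down to on the client side. This isn’t any existing NIP or client API; the `Follow` and `isTrusted` names are made up purely for illustration:

```typescript
// Hypothetical client-side model: each follow carries an explicit
// "is trusted" flag set by the user via the checkbox described above.
type Follow = {
  pubkey: string;      // hex pubkey of the followed account
  isTrusted: boolean;  // the checkbox: human judgement, set directly by the user
};

// The user's explicit choices form the root set of their web of trust.
function trustedRoots(follows: Follow[]): Set<string> {
  return new Set(follows.filter((f) => f.isTrusted).map((f) => f.pubkey));
}

// A (client-configurable) filter can then suggest additional "trustworthy"
// sources derived from those roots, while the human-set flags stay authoritative.
function isSuggestedSource(
  author: string,
  roots: Set<string>,
  algoSuggestions: Set<string>, // whatever the client's WoT algorithm produces
): boolean {
  return roots.has(author) || algoSuggestions.has(author);
}
```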

What I’m pointing out is a UX flow for getting people to make use of WoT, so that Nostr can survive the long haul. This is the “other thing to consider”.

Discussion

doesn't following someone automatically exclude them from being auto-muted by WoT?

“Wouldn’t” is the term, because fiatjaf is only proposing code… and I don’t know how he (or you) would implement it without a NIP as guidance… AND relays and clients should be free to implement “opt in” filters in all kinds of different ways.
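
For what it’s worth, one way a client could wire it up (purely hypothetical, since no NIP specifies this) is to treat an explicit follow as human judgement that the auto-mute can never override:

```typescript
// One possible opt-in rule, sketched for illustration only: a WoT auto-mute
// that always yields to the user's explicit follows.
type MuteDecision = "keep" | "auto-mute";

function wotMuteDecision(
  author: string,          // hex pubkey of the note's author
  follows: Set<string>,    // pubkeys the user explicitly follows
  wotScore: number,        // whatever score the client's WoT algorithm assigns
  threshold: number,       // user-configurable cutoff
): MuteDecision {
  // Following someone is human judgement and overrides the algorithm.
  if (follows.has(author)) return "keep";
  // Otherwise the algorithm may only suggest a mute below the threshold.
  return wotScore < threshold ? "auto-mute" : "keep";
}
```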

However, the more important point to make is that algos will never replace human judgement. They may suggest, and even stand in when asked, but they will always fall short.

So, in the end, it doesn’t matter what the filter rule “should do”, because filters “should” be transparent for end users to choose and configure.

Pick a different one, or none at all, to feed the content you desire.

you have a very special way of saying absolutely nothing with a lot of words

Sorry… god made me this way.