david
e5272de914bd301755c439b88e6959a43c9d2664831f093c51e9c799a16a102f
neurologist and freedom tech maxi Co-founder @ NosFabrica 🍇 Grapevine, đŸ§ âšĄïžBrainstorm

You sort and filter based on numbers which you’re using as a proxy for trust. The act of calculating and using personalized PageRank means you are ranking.

They can, but they shouldn’t, as I argue in my article. Why not? Main reason: portability.

nostr:naddr1qq08xetsv9exzarfdahz6mmx9468yatnwskkzmny943kc6t9de6qyg89yuk7j99axqt4t3pehz8xjkdy8jwjveyrruync50fc7v6z6ss9upsgqqqw4rsrjey5u

For starters, the nostr ecosystem needs WoT Service Providers that calculate WoT scores that are *personalized* and *portable*. I’ve written about this in several of my long form posts over the past few months. Developing this ecosystem is the main thrust of the WoT hackathon at nostr:npub1healthsx3swcgtknff7zwpg8aj2q7h49zecul5rz490f6z2zp59qnfvp8p, aka our #wotathon that goes through April. Importantly, these WoT Service Providers must be open source, which means you’ll have the option to do all the calculations yourself, as your own SP. Just like you can run your own BTC node and nostr relay if you so desire.

https://nosfabrica.com/wotathon

Do you consider mute lists to be “censorship”? I don’t, and I’m guessing you don’t either. In which case: why do you consider the WoT scores on Amethyst to fit into the censorship category?

I’d argue it’s censorship if it’s centralized, based on global trust scores. And yes, Bluesky is centralized. But if the system is personalized, like a mute list that you manage yourself or the metrics on Amethyst that are personalized to the end user, then no, it’s not censorship.

I should probably point out that the scores on Amethyst are calculated using open source software that you can run yourself. You can delegate the calculations to someone else if you wish, but it’s not necessary.

Google’s PageRank was magic. I remember when it first came out. Keyword search went from useless to amazing overnight.

But Google always calculated *global* PageRank. Meaning the PR scores were as seen by Google. The web: as viewed, as filtered, as scored, as judged by Google. The Freedom Tech way is for the algos and scores and point of view to be personalized to the end user. Be your own Google, so to speak.

Have you joined any of our weekly #wotathon community calls?

Some people think bitcoin is evil because “money is the root of all evil.” But to them, money means fiat. They don’t realize that the evils of fiat should not be assumed to apply to decentralized money.

Same thing with reputation. Centralized scoring systems are very Black Mirror. But decentralized scoring systems, where scores are personalized rather than global, belong in a different category.

We’re live!

nostr:note1nfkajtfufkqwhlsrw3luxu60xt9lj4zfmupeuq3tp5lpwlecd6uql8e97z

It should be fixed when we start broadcasting đŸ€žđŸ»

#wotathon community call in one hour!!

nostr:note1nfkajtfufkqwhlsrw3luxu60xt9lj4zfmupeuq3tp5lpwlecd6uql8e97z

GM all good people!â˜•ïžđŸŒžđŸ«‚

nostr:note1nfkajtfufkqwhlsrw3luxu60xt9lj4zfmupeuq3tp5lpwlecd6uql8e97z

Replying to Max

In 1992, Phil Zimmermann added a feature to PGP version 2.0 that was supposed to solve one of cryptography's hardest problems. He called it the "web of trust." The idea was elegant: instead of relying on certificate authorities to verify that public keys belonged to their claimed owners, users would vouch for each other. You sign my key, I sign yours, and through chains of these signatures, strangers could eventually trust each other's identities.

The vision was decentralized. It was also a complete failure.

Thirty years later, the PGP keyserver network is dead. GnuPG disabled web of trust functionality by default after spam attacks made keys unusable. The dream of cryptographic trust without central authorities died not because the math was wrong, but because the design asked too much of humans.

Nostr has quietly built what PGP could not. Its web of trust works precisely because users never have to think about it.

## The Ceremony Problem

PGP's web of trust required users to perform explicit trust rituals. You would attend a "key signing party," verify someone's identity through government documents, sign their key with your private key, then upload that signature to a keyserver. Back home, you would configure your keyring, assigning trust levels (unknown, marginal, full, ultimate) to various keys. The software would then calculate which keys were "valid" based on weighted combinations of trusted signatures.

This was not a workflow that scaled beyond cryptography enthusiasts. Tim Berners-Lee, reflecting on why PGP never achieved mass adoption, noted the UX failures: dialog boxes telling users to "do X" with no button to do X, multi-step processes to download and sign keys without explaining what any of it meant, the general sense that using encryption required joining a secret priesthood.

But the deeper problem was structural. Most users believed the web of trust worked like "six degrees of separation," where trust would propagate through long chains of connections. It did not. As Hal Finney explained in 1994, "You can only communicate securely with people who are at most two hops away in the web of connections." You could trust keys signed by people you personally knew. That was it.

By 2019, the keyserver infrastructure was collapsing. Malicious actors discovered they could flood popular keys with thousands of garbage signatures, causing GnuPG to crash when importing them. The SKS keyserver network, which had synchronized keys globally since the early 2000s, shut down entirely in 2021 after operators couldn't process GDPR deletion requests for a system designed to be append-only.

The explicit trust model created a bureaucracy. Bureaucracies don't survive contact with spam.

## Trust as Byproduct

Nostr takes the opposite approach. Instead of asking users to perform trust ceremonies, it extracts trust signals from actions they already take.

When you follow someone on Nostr, you publish a kind 3 event listing every pubkey you follow. This is not a security ritual; it is the normal behavior of using a social network. But that follow list, signed by your key, is now a cryptographic attestation. You are implicitly saying: these are the people whose content I want to see, whose judgment I find valuable enough to include in my feed.

When you mute someone, that too becomes a signed event. A warning to anyone who shares your sensibilities.

When you zap someone, you attach an economic cost to your endorsement. Fake accounts are cheap; sats are not.
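
A rough sketch of what those signals look like as raw events, with id and signature fields omitted. Kind 3 is the follow list named above; kind 10000 for mute lists follows the usual NIP-51 convention and is an assumption here:

```python
# Trust signals as ordinary signed nostr events (id and sig fields omitted).
# Kind 3 is the follow list mentioned above; kind 10000 for mute lists is
# the common NIP-51 convention, assumed here for illustration.

follow_list = {
    "kind": 3,
    "pubkey": "<your pubkey>",
    "created_at": 1700000000,
    "tags": [
        ["p", "<followed pubkey 1>"],
        ["p", "<followed pubkey 2>"],
    ],
    "content": "",
}

mute_list = {
    "kind": 10000,
    "pubkey": "<your pubkey>",
    "created_at": 1700000000,
    "tags": [
        ["p", "<muted pubkey>"],
    ],
    "content": "",
}
```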

The Nostr protocol did not invent these actions. Follows, mutes, and tips existed on centralized platforms for years. What Nostr did was make them cryptographically signed, publicly attestable, and aggregatable into trust scores. The same behaviors that made Twitter addictive now make Nostr's web of trust function.

This is the design insight that eluded PGP: trust should be a byproduct of normal activity, not a separate task requiring special knowledge. The cypherpunk who wants encrypted communication and the normie who just wants to shitpost both produce useful trust signals by doing what they were going to do anyway.

## Computing Trust from the Social Graph

Raw follow lists and zap receipts are data. Turning them into usable trust scores requires computation.

The dominant approach borrows from Google's original insight. PageRank, the algorithm that made web search work, solved a similar problem: determining which pages were important based on link structure. A page linked by many important pages was itself important. The algorithm was resistant to spam because creating fake pages that linked to you didn't help unless those fake pages were themselves linked by real pages.

Personalized PageRank adapts this for social trust. Instead of computing a single global importance score, it computes importance relative to a specific user's position in the graph. If you want to know how much to trust some pubkey you've never seen, the algorithm simulates random walks through the follow graph starting from your account. The more often those walks land on that pubkey, the more connected they are to people you already trust.
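
A minimal sketch of that random-walk idea, assuming the follow graph is already loaded into memory as a mapping from pubkey to the list of pubkeys that account follows (not any particular service's implementation):

```python
import random
from collections import Counter

def personalized_pagerank(follows, me, walks=10_000, alpha=0.85, max_steps=50):
    """Monte Carlo personalized PageRank over a follow graph.

    follows: dict mapping pubkey -> list of pubkeys that account follows
    me:      the pubkey whose point of view the scores are computed from
    alpha:   probability of continuing the walk at each step
    Returns pubkey -> estimated trust score (normalized visit frequency).
    """
    visits = Counter()
    for _ in range(walks):
        node = me
        for _ in range(max_steps):
            visits[node] += 1
            out = follows.get(node, [])
            # Walks die out at dead ends or with probability (1 - alpha),
            # which keeps scores concentrated near the starting account.
            if not out or random.random() > alpha:
                break
            node = random.choice(out)
    total = sum(visits.values())
    return {pk: count / total for pk, count in visits.items()}
```

Accounts reachable only through long, thin chains of follows end up with scores near zero, which is the spam resistance described above.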

This is what Nostr.Band does when filtering search results. It seeds initial trust to accounts with verified NIP-05 identities, then lets PageRank propagate through the network. "If initial weight is given to a spammer by some accident," their documentation explains, "they are most likely losing it all by the end of the calculation, because almost no one interacts with their content."

Coracle, the client built by hodlbod, implements a simpler version directly: your WoT score for someone equals how many people you follow who also follow them, penalized by how many people you follow who have muted them. Crude but effective.
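
In code terms, that rule is just two counts over your follow list (a sketch of the rule as described, not Coracle's actual source):

```python
def simple_wot_score(my_follows, follows_of, mutes_of, target):
    """Score = (my follows who also follow target) - (my follows who muted target)."""
    followed_by = sum(1 for f in my_follows if target in follows_of.get(f, set()))
    muted_by = sum(1 for f in my_follows if target in mutes_of.get(f, set()))
    return followed_by - muted_by
```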

## Vertex and npub.world

For developers who don't want to build graph analysis infrastructure, Vertex offers web of trust as a service. Their system crawls Nostr follow lists continuously, computes Monte Carlo PageRank scores, and exposes them through a DVM (data vending machine) interface. Query with a source pubkey and a target pubkey; get back a personalized trust score, follower counts, and the target's highest-ranked followers.

The companion tool npub.world provides a search interface for finding profiles within the Nostr network, leveraging the same trust infrastructure.

Vertex explicitly rejected the emerging NIP-85 standard for "trusted assertions," which takes a different architectural approach. Under NIP-85, service providers publish kind 30382 events that make claims about entities. The `d` tag identifies the subject (typically a pubkey), and additional tags carry the assertions: a `rank` score, follower counts, zap totals, or any other metric the provider computes. These events sit on relays like any other Nostr data, and clients can subscribe to assertions from providers they trust.
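
Based on that description, a trusted-assertion event might look roughly like this; only the kind number, the `d` tag, and the `rank` tag are taken from the description above, and the remaining tag names and all values are illustrative:

```python
# Hypothetical kind 30382 "trusted assertion" published by a WoT service
# provider. Only the d and rank tags come from the description above;
# the other tag names and all values are illustrative placeholders.
trusted_assertion = {
    "kind": 30382,
    "pubkey": "<service provider pubkey>",
    "created_at": 1700000000,
    "tags": [
        ["d", "<subject pubkey>"],   # the entity the claims are about
        ["rank", "89"],              # provider-computed trust rank
        ["followers", "1204"],       # example extra metric
        ["zap_total", "250000"],     # example extra metric (sats)
    ],
    "content": "",
}
```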

The model has appeal. It keeps everything in Nostr's event system. Users choose which assertion providers to trust, similar to choosing which relays to use. A client could subscribe to assertions from three different WoT services and weight them according to user preferences. The data is cacheable, auditable, and portable.

But Vertex identified a fundamental limitation: NIP-85 assertions are computed for a generic audience, not personalized to the querying user. If you ask "how trustworthy is pubkey X," the answer depends on who is asking. Your social graph differs from mine; your trust scores should differ too. Pre-published assertions cannot capture this. They answer "how trustworthy is X according to service provider Y" rather than "how trustworthy is X from my perspective."

The deeper problem is discovery. Static assertions require you to already know the pubkey you want to evaluate. But web of trust should help you find trustworthy accounts you don't yet know about. "Who should I follow?" is a harder question than "should I trust this specific person?" Real-time personalized computation, responding to the specific user asking the question, enables recommendations that static assertions cannot.

This is an ongoing debate. The WoT-a-thon hackathon running through April 2026 is pushing for NIP-85 adoption, with a dedicated prize track for implementations. Different approaches will compete, and the protocol will evolve. The tension between pre-computed portability and real-time personalization may not have a single correct answer.

## What Remains

Nostr's web of trust is not a solved problem. New users face a cold-start problem: without history, they have no trust scores, making it hard to break into existing networks. The computation itself, while based on decentralized data, currently runs on centralized services like Vertex and Nostr.Band. Public follow lists, which make WoT possible, also leak social graph information to anyone watching relay traffic.

But the fundamental architecture is sound. Trust signals emerge from normal behavior. Algorithms convert those signals into personalized scores. The user never has to attend a key signing party.

Zimmermann's 1992 vision was right about the goal: decentralized trust without certificate authorities. He was wrong about the method: asking users to do extra work. Nostr's contribution is recognizing that the work was always happening. It just needed to be captured.

---

**Artwork Suggestion:** "The Syndics of the Drapers' Guild" by Rembrandt van Rijn (1662). Guild officials whose role was verifying cloth quality through reputation and repeated honest dealing. Trust built through commerce and mutual accountability within a network, not through central certification.

Nice article. You’re addressing the exact same question I address in this article, which is: where does the trust signal come from? Zimmermann’s approach failed because it relied exclusively on what I call *explicit attestations of trust* — in the case of PGP, key signing ceremonies. As you point out, nostr adds the missing ingredient: follows, mutes, zaps, reactions, replies, and other examples of ordinary social behaviour, what I call *proxy indicators of trust*.

In my article I argue that we need both. And if you follow the line of thinking all the way through to the end, you arrive at the notion of *interpretation* of social signals and an algo like GrapeRank that knows what to do with interpreted data.

nostr:naddr1qqn8wetz94hkvtt5wf6hxapdwa5x2un9945hxtt5dpjj6arjw4ehgttnd9nkuctvqyt8wumn8ghj7un9d3shjtnswf5k6ctv9ehx2aqzyrjjwt0fzj7nq964csum3rnftxjre8fxvjp37zfu285u0xdpdggz7qcyqqq823cralpgz

Ideally, every nostr user would run open source software locally that calculates personalized trust metrics. This is why brainstorm is open source: so you can run your own instance and calculate your own personalized trust metrics. No need to trust a third party if you’re willing to put in a little effort.

Realistically, not everyone is a developer, and most users will want someone else to do it for them. Just like most nostr users don’t run their own personal relays, and most Bitcoin users don’t run their own full nodes. It doesn’t mean the entire endeavor is a failure. A world where only a small fraction of people actually run their own Bitcoin nodes, but almost everyone has the practical (not just theoretical) ability to do so, is better than the status quo where basically 0% of people have the ability to audit the fiat system, in theory or in practice.

So how do we maximize the number of people who actually do calculate their own personalized trust metrics? Answer: we will have to make it as easy and user friendly as possible. And did I say open source? It needs to be open source. The lower the barrier to entry, the healthier the ecosystem.

Also: nostr:nprofile1qqs8a474cw4lqmapcq8hr7res4nknar2ey34fsffk0k42cjsdyn7yqqpz9mhxue69uhkummnw3ezuamfdejj7qgwwaehxw309ahx7uewd3hkctce3f453 is working on a neo4j nostr relay in go and has put a lot of thought into the overall architecture of graph db relays. I think graph db relays could form their own special class of relay, and it could be fruitful for a gunDB relay dev and a neo4j relay dev to bounce ideas off each other.

Sounds like there will be lots of instances where a WoT Service Provider would want to deliver a bloom filter instead of a big list.

Big lists cause several problems:

1. Unwieldy to transmit by API; even just a slight delay could result in bad UX, depending on the use case

2. Won’t fit in a single event due to size limits

3. Slows down processing on the recipient’s end, whatever the list is being used for.

Any rule-of-thumb estimates we should keep in the back of our minds for how big a list of pubkeys or event ids can get before it’s worth delivering a bloom filter instead?
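
For a rough sense of scale, the textbook sizing math for a classic bloom filter is below; this is standard bloom filter math, not anything specific to nostr or to any NIP in this thread:

```python
import math

def bloom_size(n_items: int, false_positive_rate: float):
    """Textbook bloom filter sizing: bits needed and number of hash functions."""
    m_bits = -n_items * math.log(false_positive_rate) / (math.log(2) ** 2)
    k_hashes = (m_bits / n_items) * math.log(2)
    return math.ceil(m_bits), round(k_hashes)

# Example: a list of 100,000 pubkeys at a 1% false-positive rate.
bits, k = bloom_size(100_000, 0.01)
print(bits // 8 // 1024, "KiB with", k, "hash functions")  # ~117 KiB, 7 hashes
```

For comparison, 100,000 raw 32-byte pubkeys is about 3.2 MB, so the filter is roughly 25x smaller at the cost of a tunable false-positive rate.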

Replying to Vitor Pamplona

I don't think so. Everybody just saves a huge list of relays in their databases.

There are many places clients could share bloom filters. This all started with this idea: https://github.com/nostr-protocol/nips/pull/1497

In this case, I proposed sha256 as a hash function so that clients didn't need to code MurMur3, but MurMur is so easy that we can just teach people how to do it.
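
One common way to get several bit positions out of a single sha256 digest, purely as an illustration of why sha256 alone can be enough (not the scheme from the PR):

```python
import hashlib

def bloom_positions(item: bytes, k: int, m_bits: int, salt: bytes = b"") -> list[int]:
    """Derive k bit positions from one sha256 digest by slicing it into
    4-byte chunks (works for k <= 8). Illustrative only."""
    digest = hashlib.sha256(salt + item).digest()
    return [
        int.from_bytes(digest[4 * i: 4 * i + 4], "big") % m_bits
        for i in range(k)
    ]

# Membership test: an item is "possibly present" only if every derived
# bit position is set in the filter.
```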

I’m reading your NIP-76. It only takes 100 bits to handle 10 million keys without any false positives?? Wow. Very cool đŸ€Ż

Have you actually implemented this somewhere? (Production or testing) I’m curious to know what use cases we might expect to see in the wild in the short term if a bloom filter nip were to exist.

The kind 9998 list header declaration could specify the hashing algo. Or we could leave the hashing algo unspecified and recognize that it is not necessary for all clients to support all hashing algos, just like it’s not necessary to support all NIPs. Probably the community will gravitate to one algo organically, unless some devs have strong preferences that are not always aligned.

If getting everyone to agree to all the details is trivial, is there any reason not to go ahead and write up a bloom filter NIP?

Seems like convincing everyone in nostr to use the same exact specs would be a challenge. What if we come up with a system that doesn’t require everyone to use the same specs?

We declare a Decentralized List (kind 9998 per the custom NIP, linked below), called “Bloom Filter Specs”, and list the requisite parameters as “required” tags (rounds, salt, etc). So if you want to use some particular bloom filter, you declare an item on that list (a kind 9999 event) with your choice of specs and then refer to that event wherever necessary.

https://nostrhub.io/naddr1qvzqqqrcvypzpef89h53f0fsza2ugwdc3e54nfpun5nxfqclpy79r6w8nxsk5yp0qyt8wumn8ghj7un9d3shjtnswf5k6ctv9ehx2aqqzdjx2cm9de68yctvd9ax2epdd35hxarnwrn9hx
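
To make that concrete, here is roughly what an item on that list might look like; the kind numbers and the rounds/salt parameters come from the post above, and everything else is a made-up placeholder:

```python
# Hypothetical kind 9999 item on a "Bloom Filter Specs" decentralized list.
# Kind numbers and the rounds/salt parameters come from the proposal above;
# the remaining tag names and all values are illustrative placeholders.
bloom_filter_spec = {
    "kind": 9999,
    "pubkey": "<spec author pubkey>",
    "created_at": 1700000000,
    "tags": [
        ["a", "9998:<list author pubkey>:bloom-filter-specs"],  # parent list reference (placeholder format)
        ["algo", "sha256"],
        ["rounds", "7"],
        ["salt", "<hex salt>"],
        ["size", "958506"],  # filter size in bits
    ],
    "content": "",
}
```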

nostr:npub1g53mukxnjkcmr94fhryzkqutdz2ukq4ks0gvy5af25rgmwsl4ngq43drvk you were working with bloom filters at one point right? Are you using them in Iris? Any thoughts on this discussion?

I vote to give more power to the end user.

If I see Alice has N “verified” followers on one client, I want to see the same N on all clients — otherwise I have no idea what to do with the number. Which means the metrics like verified followers are personalized to me, calculated using the method that I select, and they follow me wherever I go on nostr. Which means less power to the clients.

All you need is someone to create a relay focused on the country or topic you’re interested in, then you add that relay to your relay list and you use it on any client.

Or a client could maintain a list of available country-specific relays, have a button for each one, you click your country of interest and the client knows which relay to point to. You could basically channel surf.

The more I think about it, the more I think these features of nostr:nprofile1qqsykd7klhautcjugml3jewuhtlw8zw04dl54tcdmw5m5vggk37ax3qpzfmhxue69uhkummnw3eryvfwvdhk6tcpr9mhxue69uhhxatswphhyapwdehhxarjxyhxxmmd9usvdssu are awesome and underutilized.

Relay.tools currently enables you to create a hobby-specific feed by keying on hobby-specific hashtags and using the WOA feature to keep out spam. By the time nostr:npub1healthsx3swcgtknff7zwpg8aj2q7h49zecul5rz490f6z2zp59qnfvp8p's WoT hackathon is over, you’ll be able to add a list of hobby-specific authors curated in real time by your grapevine. How does that sound?