This is the way👇
We’re also going to need a way for new users to get into the trust graph because all new users look like spammers and will automatically be blocked. nostr:note1gaut89kukvy94l4kv0692twdzm3xqqnc7m9revp06xc9vx0h6c8sr52mnx
Using the social graph as a Web of Trust isn’t ideal because it mixes two different things. The social graph is about who you follow and who follows you—connections, not trust. Just because you follow someone doesn’t mean you trust them with important info, and vice versa.
In my opinion, keeping trust separate from the social graph makes more sense. It lets you build a trust network based on actual trustworthiness, not just social ties. This way, new users aren’t automatically flagged as spammers but can earn trust through their actions, not just who they’re connected to.
Separating these systems ensures spam control while giving newcomers a fair shot to prove themselves.
I don’t view this as a “social” network of friends or family members; to me, it’s a content network. I follow people because of the content they post.
I already indirectly “trust” this content because the people I follow can already repost to me.
So yes, I trust their choice of content because I follow them.
The system you’re proposing seems to have a few issues:
First, if a new user has 0 trust, they are not shown to anyone. And if they were shown despite a zero score, then so would spammers be.
Second, any bot with low trust can just create a new npub and reset their trust score to 0 so that they can keep spamming.
Third, it’s not clear if your system has a way to avoid Sybil attacks where bots “trust” each other and that fake trust boosts their overall trust.
You’re absolutely right—using a social network for filtering content is a simple and effective solution. You follow people because you like their posts, and in a way, that’s a form of indirect trust. Plus, the functionality’s already there in most cases, and I know clients like iris.to used to do it. It works for now, no doubt.
But here’s the thing: while it’s great for quickly filtering out bots and randoms, it’s not perfect. The follow system is more about content preference than actual trust. Just because you follow someone for their spicy memes doesn’t mean you’d trust them with, say, medical advice or fact-checking. And that’s where it falls short—it doesn’t allow for a reputation system or any sort of "fact-checking" style rating on posts. It’s like giving every post the same level of credibility just because you follow the person, even if it’s not all equal.
In a Web of Trust, new users (and bots) with zero reputation aren’t automatically filtered out—they're visible at first. But here’s the catch: bots will quickly be marked as untrustworthy by just a few people, and then they’ll be filtered for everyone else in the network. It’s like crowdsourced spam control—once a bot is flagged, it’s as good as invisible to the rest of us.
To add another layer of protection, relay servers could require proof of work before accepting posts from new accounts, enforce rules like rate limits on IP, etc., making it harder for them to endlessly spam.
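The proof-of-work idea is roughly what nostr’s NIP-13 specifies: the event id must have a certain number of leading zero bits. As a minimal sketch (not the NIP-13 wire format, just the mining loop over an arbitrary message and a hypothetical nonce scheme):

```python
import hashlib

def leading_zero_bits(hex_digest: str) -> int:
    """Count leading zero bits in a hex-encoded hash."""
    bits = 0
    for ch in hex_digest:
        v = int(ch, 16)
        if v == 0:
            bits += 4          # whole nibble is zero
        else:
            bits += 4 - v.bit_length()  # leading zeros inside this nibble
            break
    return bits

def mine(message: str, difficulty: int) -> int:
    """Grind nonces until sha256(message:nonce) has `difficulty` leading zero bits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

# A relay would check the same condition cheaply on receipt:
nonce = mine("hello nostr", 8)  # ~2^8 hashes on average
digest = hashlib.sha256(f"hello nostr:{nonce}".encode()).hexdigest()
assert leading_zero_bits(digest) >= 8
```

The asymmetry is the point: the new account burns CPU per post, while the relay verifies with a single hash, which is what makes endless spam from fresh npubs expensive.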
As for bots that trust each other—well, that’s not really a problem. In a Web of Trust, it’s not about the size of the network; it’s about who you trust. So, if bots are busy trusting each other, it doesn’t affect your network unless someone in your network starts trusting them. And since no one in your network is likely to trust a bot, those fake trust loops don’t impact you at all.
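That Sybil-resistance property falls out naturally if trust is computed outward from *your* key rather than globally. A toy sketch (the hop-decayed scoring and the names are illustrative assumptions, not any client’s actual algorithm):

```python
from collections import deque

def personalized_trust(me, trust_edges, max_hops=3):
    """Breadth-first walk over MY trust edges; score decays with hop distance.
    Anyone unreachable from me (e.g. a Sybil ring trusting only each other)
    simply never appears, i.e. effectively zero trust."""
    scores = {me: 1.0}
    frontier = deque([(me, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops >= max_hops:
            continue
        for neighbor in trust_edges.get(node, ()):
            if neighbor not in scores:
                scores[neighbor] = 1.0 / (hops + 2)  # 1/2 at 1 hop, 1/3 at 2 hops...
                frontier.append((neighbor, hops + 1))
    return scores

edges = {
    "me":    ["alice"],
    "alice": ["bob"],
    "bot1":  ["bot2"],  # Sybil ring: bots trusting each other
    "bot2":  ["bot1"],
}
scores = personalized_trust("me", edges)
assert scores["bob"] > 0        # reachable through people I trust
assert "bot1" not in scores     # the bot ring never enters my view
```

However large the bot cluster grows, it only enters `scores` if someone already in my walk trusts into it, which is exactly the claim above.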
“The follow system is more about content preference than actual trust. Just because you follow someone for their spicy memes doesn’t mean you’d trust them with, say, medical advice or fact-checking.”
Yes, I’m talking about content preference and not “trust” in the sense where people would rank someone’s medical, legal, or stock-picking knowledge with a separate score.
I’m viewing nostr through the simple lens of spam vs not spam. Perhaps in the future a ranking of medical professionals would be useful and perhaps we’ll see that in another nostr app.
They must start by just posting to the PoW relay until they have made enough friends.
Yes, we’ll need some significant cost for new users: proof of bitcoin paid or owned, proof of work for creating an account or for posting a message, or some combination of the above for extra options.