I don’t view this as a “social” network of friends or family members; to me, it’s a content network. I follow people because of the content they post.

I already indirectly “trust” this content, because the people I follow can already repost it into my feed.

So yes, I trust their choice of content because I follow them.

The system you’re proposing seems to have a few issues:

First, if a new user has 0 trust, they are not shown to anyone. But if users with a zero score were shown, then spammers would be shown too.

Second, any bot with low trust can just create a new npub and reset its trust score to 0 so that it can keep spamming.

Third, it’s not clear whether your system has a way to avoid Sybil attacks, where bots “trust” each other so that the fake trust inflates their overall scores (sketched below).
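
To make that third issue concrete, here is a toy sketch of how a Sybil ring games a naive global trust score. All names are hypothetical; this is not any client’s actual algorithm:

```typescript
// A minimal sketch of the Sybil problem with a *global* trust score.

type Pubkey = string;

// trustEdges[a] = set of pubkeys that `a` claims to trust
const trustEdges = new Map<Pubkey, Set<Pubkey>>();

function addTrust(from: Pubkey, to: Pubkey): void {
  if (!trustEdges.has(from)) trustEdges.set(from, new Set());
  trustEdges.get(from)!.add(to);
}

// Naive global score: count incoming trust edges from *anyone*.
function naiveGlobalScore(who: Pubkey): number {
  let score = 0;
  for (const targets of trustEdges.values()) {
    if (targets.has(who)) score++;
  }
  return score;
}

// A ring of 100 bots that all "trust" each other...
const bots = Array.from({ length: 100 }, (_, i) => `bot${i}`);
for (const a of bots) for (const b of bots) if (a !== b) addTrust(a, b);

// ...gives every bot a high global score without any real user involved.
console.log(naiveGlobalScore("bot0")); // 99
```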

Discussion

You’re absolutely right: using a social network for filtering content is a simple and effective solution. You follow people because you like their posts, and in a way, that’s a form of indirect trust. Plus, the functionality’s already there in most clients, and I know clients like iris.to used to do it. It works for now, no doubt.
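
Concretely, a follow-based filter can be just a few lines. This is a sketch with hypothetical names; a real client would populate the follow graph from users’ contact-list (kind-3) events:

```typescript
// A minimal sketch of follow-based feed filtering.

type Pubkey = string;

interface Note {
  pubkey: Pubkey;
  content: string;
}

// follows[a] = pubkeys that `a` follows
const follows = new Map<Pubkey, Set<Pubkey>>();

// Everyone I follow, plus everyone *they* follow (one hop of indirect trust).
function visiblePubkeys(me: Pubkey): Set<Pubkey> {
  const visible = new Set<Pubkey>(follows.get(me) ?? []);
  for (const friend of follows.get(me) ?? []) {
    for (const fof of follows.get(friend) ?? []) visible.add(fof);
  }
  return visible;
}

// Keep only notes from pubkeys inside my follow neighborhood.
function filterFeed(me: Pubkey, notes: Note[]): Note[] {
  const visible = visiblePubkeys(me);
  return notes.filter((n) => visible.has(n.pubkey));
}
```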

But here’s the thing: while it’s great for quickly filtering out bots and randoms, it’s not perfect. The follow system is more about content preference than actual trust. Just because you follow someone for their spicy memes doesn’t mean you’d trust them with, say, medical advice or fact-checking. And that’s where it falls short: it doesn’t allow for a reputation system or any sort of "fact-checking" style rating on posts. It’s like giving every post the same level of credibility just because you follow the person, even though not all of it deserves it.

In a Web of Trust, new users (and bots) with zero reputation aren’t automatically filtered out; they’re visible at first. But here’s the catch: bots will quickly be marked as untrustworthy by just a few people, and then they’ll be filtered out for everyone downstream in the web of trust. It’s like crowdsourced spam control: once a bot is flagged, it’s as good as invisible to the rest of us.
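
As a rough sketch of that flagging mechanic (the rating format and visibility threshold here are made up for illustration, not an existing NIP):

```typescript
// A minimal sketch of crowdsourced flagging in a web of trust.

type Pubkey = string;

// +1 = trustworthy, -1 = spam/untrustworthy, as rated by other users
interface Rating {
  rater: Pubkey;
  target: Pubkey;
  value: 1 | -1;
}

// Only ratings from people inside *my* network count toward the score.
function scoreFor(target: Pubkey, myNetwork: Set<Pubkey>, ratings: Rating[]): number {
  return ratings
    .filter((r) => r.target === target && myNetwork.has(r.rater))
    .reduce((sum, r) => sum + r.value, 0);
}

// New accounts start at 0 and are visible; -1 flags from your network
// push the score negative and the account disappears from your feed.
function isVisible(target: Pubkey, myNetwork: Set<Pubkey>, ratings: Rating[]): boolean {
  return scoreFor(target, myNetwork, ratings) >= 0;
}
```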

To add another layer of protection, relay servers could require proof of work before accepting posts from new accounts, enforce rules like per-IP rate limits, etc., making it harder for them to spam endlessly.
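
For the proof-of-work part, nostr already has NIP-13, which measures difficulty as the number of leading zero bits in the event id. A relay-side check could look roughly like this; the difficulty target and the new-account policy are illustrative choices, not anything specified by a NIP:

```typescript
// A minimal sketch of a relay-side proof-of-work check, in the spirit of
// NIP-13 (difficulty = leading zero bits of the hex-encoded event id).

function leadingZeroBits(hexId: string): number {
  let bits = 0;
  for (const ch of hexId) {
    const nibble = parseInt(ch, 16);
    if (nibble === 0) {
      bits += 4; // whole nibble is zero
    } else {
      // 32 - Math.clz32(nibble) is the bit length of the nibble;
      // the remainder of the 4 bits are leading zeros.
      bits += 4 - (32 - Math.clz32(nibble));
      break;
    }
  }
  return bits;
}

// Relay policy: new accounts must attach work; established ones skip it.
function acceptEvent(eventId: string, isNewAccount: boolean, minDifficulty = 20): boolean {
  return !isNewAccount || leadingZeroBits(eventId) >= minDifficulty;
}
```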

As for bots that trust each other: that’s not really a problem. In a Web of Trust, scores aren’t global; it’s not about the size of the network but about who you, specifically, trust. So if bots are busy trusting each other, it doesn’t affect your network unless someone in your network starts trusting them. And since no one in your network is likely to trust a bot, those fake trust loops don’t impact you at all.
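
A sketch of why that holds: your trusted set is computed as a walk outward from your own key, so a ring of mutually-trusting bots is simply never reached. The graph and hop limit here are hypothetical:

```typescript
// A minimal sketch of a *personal* web of trust via breadth-first search.

type Pubkey = string;

// trusts[a] = pubkeys that `a` has explicitly trusted
const trusts = new Map<Pubkey, Set<Pubkey>>();

function addTrustEdge(from: Pubkey, to: Pubkey): void {
  if (!trusts.has(from)) trusts.set(from, new Set());
  trusts.get(from)!.add(to);
}

// Walk outward from `me` over trust edges, up to `maxHops` away.
function myTrustedSet(me: Pubkey, maxHops = 2): Set<Pubkey> {
  const seen = new Set<Pubkey>([me]);
  let frontier: Pubkey[] = [me];
  for (let hop = 0; hop < maxHops; hop++) {
    const next: Pubkey[] = [];
    for (const p of frontier) {
      for (const t of trusts.get(p) ?? []) {
        if (!seen.has(t)) {
          seen.add(t);
          next.push(t);
        }
      }
    }
    frontier = next;
  }
  return seen;
}

// The bot ring has plenty of edges, but none of them start from anyone
// I trust, so no bot is ever reachable from "me".
addTrustEdge("me", "alice");
addTrustEdge("alice", "bob");
addTrustEdge("bot0", "bot1");
addTrustEdge("bot1", "bot0");
console.log(myTrustedSet("me")); // Set { "me", "alice", "bob" }
```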

“The follow system is more about content preference than actual trust. Just because you follow someone for their spicy memes doesn’t mean you’d trust them with, say, medical advice or fact-checking.”

Yes, I’m talking about content preference, not “trust” in the sense of people rating each other’s medical, legal, or stock-picking expertise with separate scores.

I’m viewing nostr through the simple lens of spam vs. not spam. Perhaps in the future a ranking of medical professionals would be useful, and perhaps we’ll see that in another nostr app.