Instead of asking you to elaborate, I asked nostr:npub16g4umvwj2pduqc8kt2rv6heq2vhvtulyrsr2a20d4suldwnkl4hquekv4h "Deep Research":
nostr:naddr1qvzqqqr4gupzq3huhccxt6h34eupz3jeynjgjgek8lel2f4adaea0svyk94a3njdqqxnzde5xgmr2dpcxscnyve563jzel
My input was:
Nostr is highly vulnerable to Sybil attacks. How could we fix that? What are existing attempts? Consider "Anonymous usage tokens from curve trees or autct", zaps, wot, nip5, ...
Nostr is above all "censorship resistance" and thus requires privacy preserving mechanisms.
Compare the trade-offs. nip5, in my estimation, is a very weak form of identity that can itself be used in Sybil attacks. Zaps add cost, but we currently have no good proof of expenditure, so attackers might even be net receivers of zaps. We have follows, but follows don't imply trust, so the "wot" applications so far are really just webs of association with no attestations to being actual humans.
My main focus is feasibility from a user perspective. PGP "failed" on a broader scale because it is too cumbersome.
Explain what it would take to protect all users against Sybil attacks and how individuals could protect themselves from following bots.
Ease of use and resistance to Sybil attacks are both important.
Explore all you can find. Behavioral analysis will be a cat-and-mouse game where bots get better at posting like humans but surely some bots can be identified like that.
So I guess I still have to read up on aut-ct as this "report" didn't help me much in terms of providing context.
Oh, that's interesting. First, apologies for casually asserting that aut-ct etc. is my attempt to "do" anti-Sybil without giving any details; having made so many posts about it in the past, I didn't want to bore people. So overall: I think this question (how to defend against Sybil attacks in networks that are (1) open, (2) decentralized, and (3) private) is just an extremely hard and deep question generally. For example, you say you see nip5 as weak, but it latches onto an existing system which is quite "strong", though very expensive and also not private. A WoT or other trust-network solution might have similar trade-offs (costly but *could* be strong), but could also just fail horribly depending on the algorithm. Meanwhile, cost-based solutions might be strong but... uhh... costly, lol. And "anonymous tokens from scarcity of utxos", as I'm suggesting, is just generally weak, but could prevent bursty attacks on a system. So I don't know, really. Also to be considered is pure proof of work, which has its own problems and trade-offs.
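For reference, pure proof of work already exists on Nostr as NIP-13: a client grinds a "nonce" tag until the event id (the SHA-256 of the NIP-01 serialization) has a target number of leading zero bits, and relays or clients can cheaply verify and filter by that difficulty. A minimal sketch, assuming the standard NIP-01 serialization; the function names and return shape are mine:

```python
import hashlib
import json

def leading_zero_bits(hex_id: str) -> int:
    """NIP-13 difficulty: count leading zero bits of a hex event id."""
    bits = 0
    for ch in hex_id:
        v = int(ch, 16)
        if v == 0:
            bits += 4          # whole nibble is zero
        else:
            bits += 4 - v.bit_length()  # leading zeros within this nibble
            break
    return bits

def mine_event(pubkey: str, created_at: int, kind: int,
               content: str, target: int) -> dict:
    """Grind the NIP-13 nonce tag until the event id meets the target."""
    nonce = 0
    while True:
        # NIP-13 nonce tag: ["nonce", "<nonce>", "<target difficulty>"]
        tags = [["nonce", str(nonce), str(target)]]
        # NIP-01 serialization: [0, pubkey, created_at, kind, tags, content]
        serialized = json.dumps(
            [0, pubkey, created_at, kind, tags, content],
            separators=(",", ":"), ensure_ascii=False,
        )
        event_id = hashlib.sha256(serialized.encode()).hexdigest()
        if leading_zero_bits(event_id) >= target:
            return {"id": event_id, "tags": tags,
                    "difficulty": leading_zero_bits(event_id)}
        nonce += 1
```

The trade-off shows up directly in the target parameter: high enough to deter bursty spam, low enough that a phone can still post, which is exactly the "strong but costly" tension above.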
Thinking more about Sybil resistance: Nostr's follow-centric design already pushes moderation to the periphery. Instead of universal barriers, what if clients incorporated optional bot-detection signals while preserving user choice? Economic or cryptographic barriers alone risk creating markets for approved bots while excluding legitimate users. The goal isn't preventing fake accounts but reducing the influence of accounts that lack social capital.
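The "optional signals, user choice" idea could look like a client-side score: each signal contributes a weight, and the user picks the threshold at which the client flags or demotes an account. A minimal sketch; the signal names, weights, and threshold here are purely illustrative assumptions, not calibrated values:

```python
# Hypothetical signals; negative weights are evidence of a real user,
# positive weights are evidence of bot-like behavior.
SIGNALS = {
    "followed_by_my_follows": -0.5,   # social proof from the user's own graph
    "fresh_key_high_volume":   0.4,   # brand-new key posting at high rate
    "duplicate_content":       0.3,   # same note broadcast across accounts
    "valid_pow":              -0.2,   # paid a NIP-13-style posting cost
}

def bot_score(observed: set[str]) -> float:
    """Sum the weights of the signals observed for an account."""
    return sum(w for name, w in SIGNALS.items() if name in observed)

def should_flag(observed: set[str], threshold: float = 0.5) -> bool:
    # The threshold is user-configurable, so the final choice stays
    # with the client/user rather than a universal gatekeeper.
    return bot_score(observed) >= threshold
```

Because flagging is local and optional, a wrong weight degrades one user's feed rather than excluding anyone from the network, which is the property the paragraph above is after.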
I thought about building a service that warns people about questionable follows: "You are following Alice. Consider following AliceNew; Alice announced that her keys leaked and asked people to follow AliceNew," or "Are you sure you want to follow Bob? Many users report this account as an impersonator of this other Bob."
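One simple heuristic such a service could start with: before a follow is confirmed, compare the candidate's display name against the names of accounts already established in the user's extended graph, and warn when the name matches but the key differs. A rough sketch under that assumption; the data shapes and function name are hypothetical:

```python
def impersonation_warnings(candidate_name: str, candidate_pubkey: str,
                           trusted_profiles: dict[str, str]) -> list[str]:
    """Warn if the candidate reuses the display name of an established account.

    trusted_profiles maps pubkey -> display name for accounts that are
    well-connected in the user's follow graph (however the service
    chooses to define "established").
    """
    warnings = []
    for pubkey, name in trusted_profiles.items():
        # Case-insensitive name collision with a *different* key is the
        # classic impersonation pattern.
        if name.casefold() == candidate_name.casefold() and pubkey != candidate_pubkey:
            warnings.append(
                f"'{candidate_name}' matches an established account "
                f"({pubkey[:8]}...) under a different key; possible impersonation."
            )
    return warnings
```

A real service would need more than name matching (similar avatars, lookalike nip5 identifiers, report counts), but even this cheap check catches the "other Bob" case described above without any global identity requirement.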
I guess I was thinking about it more at the infrastructure level, i.e. how relays can manage resource usage at larger scales. The problem of distinguishing "real" users from fakes or bots is different. Using WoT-type solutions for that, as you suggest, is reasonable, I guess, but there may never be a perfect solution there. Well, it's a long discussion.