Makes total sense, nostr:nprofile1qqsyfhqu9kuu877hhm5j2lkwk5478nuvgza00d3lgmjkkk9px8r57zcprfmhxue69uhkvun9v4kxz7fwwdhhvcnfwshxsmmnwshszxmhwden5te0w35x2en0wfjhxapwdehhxarjxyhxxmmd9uqsuamnwvaz7tmwdaejumr0dshszy0a9p. Thanks for replying to me and for the clarifications.
I honestly think IP logging is unavoidable. For example, think of the "EmojiGuy" attack, which bypassed spam filters. I know that we can always build increasingly sophisticated spam filters, but it's a game of cat and mouse — bad actors will find ways around even the most advanced systems.
EmojiGuy wasn’t even rotating IPs, only keys, and still managed to create chaos on multiple relays. Now imagine "EmojiGuy 2.0" using IPv6, rotating IPs over a /48 or /64 subnet. Then consider "EmojiGuy 3.0," spamming from a gazillion different IPv4 and IPv6 addresses. We’ll need a quick way to identify such attacks and temporarily block ranges of IPs to respond effectively.
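Just to make the "block ranges, not addresses" idea concrete, here's a rough sketch in Python. It buckets client IPs by subnet (/64 for IPv6, /24 for IPv4) before rate-limiting, so an attacker hopping across a whole prefix still shows up as one noisy range. The thresholds, prefix lengths, and function names are all invented for illustration — not something any relay ships today.

```python
# Sketch: bucket incoming client IPs by subnet so an attack spread across a
# /64 (IPv6) or /24 (IPv4) shows up as one noisy range instead of thousands
# of "unique" addresses. Thresholds and prefix lengths are placeholders.
import ipaddress
import time
from collections import defaultdict

EVENTS_PER_MINUTE_PER_RANGE = 300  # hypothetical threshold
BLOCK_SECONDS = 15 * 60            # temporary block, not a permanent blacklist

counters: dict[str, list[float]] = defaultdict(list)  # range -> event timestamps
blocked_until: dict[str, float] = {}                   # range -> unblock time

def ip_range(ip: str) -> str:
    """Collapse an address to its containing range: /64 for IPv6, /24 for IPv4."""
    addr = ipaddress.ip_address(ip)
    prefix = 64 if addr.version == 6 else 24
    return str(ipaddress.ip_network(f"{ip}/{prefix}", strict=False))

def allow_event(ip: str, now: float | None = None) -> bool:
    """Return False if this IP's whole range is currently rate-limited."""
    now = now or time.time()
    rng = ip_range(ip)
    if blocked_until.get(rng, 0) > now:
        return False
    # Keep only the last minute of timestamps for this range.
    recent = [t for t in counters[rng] if now - t < 60]
    recent.append(now)
    counters[rng] = recent
    if len(recent) > EVENTS_PER_MINUTE_PER_RANGE:
        blocked_until[rng] = now + BLOCK_SECONDS
        return False
    return True
```

The point is just that the counter key is the subnet, not the address — without IP logging at that granularity, "EmojiGuy 2.0" rotating through a /64 looks like thousands of well-behaved strangers.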
The Ditto model is interesting. It’s close to what most Twitter-like tools running on top of ActivityPub are doing. Still, while community admin/mod tools are required to run a resilient server, they aren’t necessarily sufficient.
The likely result of Ditto's model is that folks running Ditto servers will soon realise that NIP-05 isn't enough to reliably identify users. We’ll likely end up with islands of Ditto servers whitelisting only other "trustworthy" domains (i.e., other Ditto servers and similar tools with user registration forms and centralised moderation). Don't get me wrong, it certainly works. The Fediverse is brilliant, and it has grown to its current size despite many defederated forks, blacklists, death threats to server admins and developers, etc. I'm a huge fan of ActivityPub and believe people are overcoming these challenges there. However, I hope that the Nostr experiment takes a different direction — at least for the sake of diversity and not putting all our eggs in one basket.
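To illustrate the "islands of trusted domains" outcome: a server in that model would only admit users whose NIP-05 identifier resolves on an allowlisted domain. A minimal sketch, assuming a hypothetical allowlist (the domains are made up, and this isn't how Ditto is actually implemented) — only the `.well-known/nostr.json` lookup itself follows NIP-05:

```python
# Sketch: accept only users whose NIP-05 identifier resolves on a domain we
# already trust. TRUSTED_DOMAINS is hypothetical; the lookup follows NIP-05.
import json
import urllib.request

TRUSTED_DOMAINS = {"ditto.example", "other-ditto.example"}  # hypothetical allowlist

def nip05_resolves(identifier: str, pubkey_hex: str, timeout: float = 5.0) -> bool:
    """Check a 'name@domain' NIP-05 identifier against a hex pubkey."""
    name, _, domain = identifier.partition("@")
    if domain not in TRUSTED_DOMAINS:
        return False  # the "whitelisting only trustworthy domains" part
    url = f"https://{domain}/.well-known/nostr.json?name={name}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            data = json.load(resp)
    except (OSError, ValueError):
        return False
    return data.get("names", {}).get(name) == pubkey_hex
```

Anyone can stand up a domain and serve a valid `nostr.json`, which is exactly why the allowlist creeps in — and why I worry the end state looks like federated islands rather than an open network.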
I really like your idea of "user trust" with the right incentives to encourage good behaviour. IME this sort of gamification of user reputation works. Over time, hopefully, we'll have trustworthy users who not only self-manage and report bad behaviour but also actively participate in decision-making within community-managed relays (hence my focus on voting, achieving consensus, etc.). Of course, one step at a time — getting the "reputation" system in place alone is already a huge undertaking, and it's awesome that you're already working on it.
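Purely to show the kind of incentive structure I have in mind (every event, weight, and threshold below is invented, not a proposal for your implementation): good behaviour accrues trust slowly, bad behaviour costs more than it earns, and privileges like reporting or voting on relay policy unlock at thresholds.

```python
# Illustrative sketch of gamified user trust: weighted events feed a score,
# and privileges unlock at thresholds. All numbers and event names are made up.
from dataclasses import dataclass, field

WEIGHTS = {
    "account_age_day": 0.1,    # slow, passive accrual
    "report_confirmed": 5.0,   # their spam reports were upheld by moderators
    "report_rejected": -3.0,   # frivolous reports cost more than they earn
    "post_removed": -10.0,     # their own content was moderated away
}

PRIVILEGES = [  # (minimum score, privilege)
    (10.0, "can_report"),
    (50.0, "can_vote_on_policy"),
]

@dataclass
class UserTrust:
    pubkey: str
    score: float = 0.0
    history: list[str] = field(default_factory=list)

    def record(self, event: str) -> None:
        """Apply a weighted trust event and keep it for auditability."""
        self.score += WEIGHTS.get(event, 0.0)
        self.history.append(event)

    def privileges(self) -> list[str]:
        return [name for threshold, name in PRIVILEGES if self.score >= threshold]
```

The asymmetry (penalties larger than rewards) is the bit I care about — it makes farming reputation with throwaway keys expensive, which ties back nicely to the voting/consensus ideas for community-managed relays.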
I hope my comments were helpful. As I mentioned before, I'm happy to help in any way I can. The more experiments we run to make Nostr resilient to attacks while still welcoming to new users, the better things will get. 💪