Ok so my nostr relay idea is almost done. It's not WOT but a rep system that rewards human interaction and not bot-like behavior (I hope).

Everyone starts out neutral on the relay. Simply passing spam checks boosts rep. Interactions from others (reactions, comments, zaps, boosts) all boost rep, and all are equally weighted. A valid NIP-05 also boosts rep. Not having a NIP-05 does nothing, but having an invalid NIP-05 slowly dings rep.
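A toy sketch of how those equally weighted signals could be tallied. All names and point values here are illustrative assumptions, not the relay's actual code:

```python
# Hypothetical sketch of the equally weighted rep signals described above.
# Point values are made up for illustration.
INTERACTION_BOOST = 1    # reactions, comments, zaps, boosts all weigh the same
SPAM_CHECK_BOOST = 1     # simply passing spam checks boosts rep
VALID_NIP05_BOOST = 1    # a valid NIP-05 boosts rep
INVALID_NIP05_DING = -1  # an invalid NIP-05 slowly dings rep

def update_rep(rep: int, signal: str) -> int:
    """Apply one rep signal. Everyone starts at 0 (neutral)."""
    weights = {
        "reaction": INTERACTION_BOOST,
        "comment": INTERACTION_BOOST,
        "zap": INTERACTION_BOOST,
        "boost": INTERACTION_BOOST,
        "spam_check_passed": SPAM_CHECK_BOOST,
        "nip05_valid": VALID_NIP05_BOOST,
        "nip05_invalid": INVALID_NIP05_DING,
        # "nip05_missing" intentionally absent: no effect either way
    }
    return rep + weights.get(signal, 0)

rep = 0  # neutral start
for signal in ["spam_check_passed", "zap", "comment", "nip05_invalid"]:
    rep = update_rep(rep, signal)
print(rep)  # 1 + 1 + 1 - 1 = 2
```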

Spam-like behavior also dings rep and can eventually get you kicked off the relay. To be fair, I'm also still trying to work out what bot-like behavior is. I'm mostly taking this from the spam filter I made to block the replyguy shit on Freelay. I don't want to analyze the content of a note, I'm not trying to dictate what can and cannot be said, but I'd like to stop bots from annoying people.

One thing I did implement is that a 1984 report will affect your rep, and if enough people hit you with a report you can be nuked off the relay (I'm not 100% sure on this choice yet. I know it enables mob behavior, or bots could mass-report and nuke legit users, so I might nuke this aspect, or if someone has a better way to use this I'm all ears). The idea behind this is so users of the relay can kick people who are posting "objectionable" content. I know that's subjective, but for example, if someone is posting a bunch of AI porn and users of the relay don't want to see that, they can effectively get it off the relay.
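One sketch of how the report threshold could work, counting distinct reporters so a single npub can't spam reports (the ding size and nuke threshold are made-up numbers):

```python
# Illustrative sketch of the kind-1984 report handling described above.
# REPORT_DING and NUKE_REPORT_COUNT are assumptions, not the relay's values.
REPORT_DING = -2        # a report hits rep harder than a normal spam-check failure
NUKE_REPORT_COUNT = 10  # distinct reporters needed before removal

def handle_report(rep: int, reporters: set, reporter_npub: str) -> tuple:
    """Apply one report; returns (new_rep, nuked?)."""
    if reporter_npub in reporters:
        return rep, False  # ignore duplicate reports from the same npub
    reporters.add(reporter_npub)
    rep += REPORT_DING
    return rep, len(reporters) >= NUKE_REPORT_COUNT
```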

The relay has two databases: "purgatory," where all notes get stored and looked at (this database is purged frequently), and a second database for trusted users. Once you get to a certain trust level you get whitelisted and bypass purgatory. Plus, a higher trust score gives you a higher rate limit and more time that your notes are stored.
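Roughly, the routing could look like this. The threshold and data structures are placeholder assumptions; a real relay would route events through its storage layer, not dicts into lists:

```python
# Rough sketch of the purgatory / trusted two-database routing described above.
WHITELIST_THRESHOLD = 50  # assumed trust score for bypassing purgatory

purgatory = []   # all notes land here by default; purged frequently
trusted_db = []  # whitelisted users write here: longer retention, higher rate limit

def store_note(note: dict, rep: int) -> str:
    """Route a note to the right database based on the author's rep."""
    if rep >= WHITELIST_THRESHOLD:
        trusted_db.append(note)  # trusted user: bypasses purgatory entirely
        return "trusted"
    purgatory.append(note)       # everyone else: held for review, purged often
    return "purgatory"
```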

I know people have a boner for WOT right now, so this idea might not be too popular, but this is my attempt at a more open relay system that still blocks spam. Not sure where it fits in the inbox/outbox model, but I'd like to see nostr remain open without the need for WOT, since IMO, as WOT stands, it's mostly a band-aid for established users. I don't think WOT is bad... just sloppy as it is right now.

I'm mostly posting this for feedback on the idea, so please rip it apart if you think it's dumb. I know a lot of human behavior can be gamed by someone good at making bots, so I would take any advice here. I'm still working out kinks. There is also a part of my brain that is like "fuck a rep system" as I make this, so I'm trying to make it as fair and as open as I can.

#nostr #relays #asknostr


Discussion

A bit like a social credit score, right? Interact nicely, and you increase in status.

I hate using the term social credit score, but you're not entirely wrong. That's why I didn't want to analyze the content of the note, just the behavior of the person posting on the relay.

Like, for instance, one example of spammy behavior is a brand-new npub replying under a second after a post is made. That's obviously spam.
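That kind of timing check is easy to sketch. The exact thresholds here are assumptions, not the relay's configured values:

```python
# Sketch of the timing heuristic described above: a brand-new npub replying
# under a second after a post is a strong spam signal. Numbers are illustrative.
MIN_REPLY_DELAY = 1.0  # seconds; faster than this looks automated
NEW_NPUB_AGE = 3600.0  # an npub first seen under an hour ago counts as "new"

def looks_like_spam(reply_ts: float, parent_ts: float, npub_first_seen_ts: float) -> bool:
    """True if a new npub replied suspiciously fast to the parent note."""
    is_new_npub = (reply_ts - npub_first_seen_ts) < NEW_NPUB_AGE
    too_fast = (reply_ts - parent_ts) < MIN_REPLY_DELAY
    return is_new_npub and too_fast
```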

Yeah, it’s an interesting concept. Will definitely check out your PoC when it’s up and running. Liking all these ideas people are coming up with.

Love this idea. And I wouldn't worry about mob rule so long as it's from legitimate users. Cases where it gets abused will likely be few and far between, and when it does happen, there are plenty of relays and many self-hostable options.

The idea behind that was also kind of for community relays. For instance, say somebody is running this as a car-only relay and somebody else is posting bike spam (I use the word spam loosely here), but still, they're posting content that isn't supposed to be on the relay. Legitimate users can get it off the relay.

Sounds way better than WOT. At least you're not shutting out new people by default. I burned my key and started over with this one. I found myself having to bribe people to follow me so I can exist and be seen while all this spam stuff gets sorted out.

It's a cool idea.

If you can calibrate this properly, it's better than WoT. Tricky, though.

Yeah, part of the challenge of writing this has been making parameters configurable. Right now the config file has a lot of options. 😅

More khatru voodoo?

Yes lol.

A social credit score? 😂 Jokes aside. This is definitely an interesting idea. I'd be interested to see how this pans out. It may just work.

Once I have a proof of concept relay running I'll post all the code so people can pick it apart. I'm using Fiatjaf's relay framework for this.

Sounds familiar :)

Yes, seriously, thank you for the two databases idea. I don't know why I didn't think about that before, but as soon as you said that, I was like, "oh shit!"

A V4V Score System... Interesting.

I like this idea. It protects against attacks and encourages good behaviour without excluding new users.

Since you have asked for suggestions I couldn't resist writing a wall of text (sorry :)).

Like you, I’m not sure about automatically "nuking" accounts. One thing is certain though: automated moderation should be applied in steps, and banning should be a last resort.

For example, here’s an idea: First, mark all notes from an offending account as sensitive and severely rate limit it (e.g., limit Kind 1 notes to one every 30 minutes or so). Repeated dropped messages due to rate limiting should decrease the account score even further (but be careful here, as I've seen algorithms misbehave due to technical issues outside of the account holder's control). If the bad behaviour persists, stop propagating notes from this account for a fixed amount of time, say 48 hours. Also, record the account’s IP address. If multiple accounts using the same IP are misbehaving, then start dropping all messages coming from this IP for a longer period of time, e.g., one or two weeks.
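The ladder described above could be sketched as a simple strike-to-action mapping. The labels and cutoffs are illustrative, not a real policy:

```python
# Sketch of the stepped moderation ladder suggested above: escalate gradually,
# and leave banning as a last-resort manual action. Steps are illustrative.
def next_action(strikes: int) -> str:
    """Map an account's strike count to the next automated moderation step."""
    if strikes == 0:
        return "ok"
    if strikes == 1:
        return "mark_sensitive_and_rate_limit"  # e.g. one kind-1 note / 30 min
    if strikes == 2:
        return "mute_48h"                       # stop propagating notes for a while
    return "flag_for_manual_review"             # humans decide on anything permanent
```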

Permanently banning an account or IP from a relay should be a last-resort manual action. I encourage a mechanism for community moderation, similar to Stack Overflow, so that not all of the onus falls on relay administrators. Community moderation would be more complex and would likely require a new NIP with a few new types of notes. One idea would be to allow trusted/high-reputation users to "vote" on the fate of an account after a certain number of reports. For instance, they could be sent a sample of the account’s notes and aggregate statistics, and vote to either "absolve" the account or impose a longer temporary (e.g., one month) or permanent ban. A minimum odd number of votes (e.g., five) would be required to take action, with the majority ruling. IP bans should probably be left only to moderators and highly trusted users. This group can also manually suspend or unsuspend accounts.
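The voting rule above (a minimum odd number of votes, majority rules) could be sketched like this:

```python
# Sketch of the community vote described above: wait for an odd number of at
# least `minimum` votes, then let the majority rule. Names are assumptions.
def tally(votes: list, minimum: int = 5) -> str:
    """votes are 'absolve' or 'ban'; returns the outcome or 'pending'."""
    if len(votes) < minimum or len(votes) % 2 == 0:
        return "pending"  # even counts wait for a tiebreaking vote
    bans = votes.count("ban")
    return "ban" if bans > len(votes) - bans else "absolve"
```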

I’ve seen this type of system work well before. It’s highly effective at automatically mitigating spam and antisocial behaviour while giving users a fair(er/ishy) chance and encouraging community moderation. It also avoids Mastodon’s current curse, with server admins burning out and giving up due to the sheer volume of moderation work on their plates.

Hopefully, this is helpful. I understand that such a system would be complex to implement and still vulnerable to abuse (community moderation is far from a solved problem). However, like most people-related issues, it’s a complex challenge that requires thoughtful solutions.

Let me know if I can help in any way.

Ok so I should have explained the rep system a bit more. So far there are four trust levels: untrusted, neutral, positive, and trusted. Each level has different limits. If you maintain untrusted status for a set amount of time (the default is a week, but it's configurable), you get dropped. So you don't get kicked off the relay right away; there is time to recover.
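In sketch form, that could look like the following. The week-long default comes from the post above; the score cutoffs are placeholder assumptions:

```python
# Sketch of the four trust levels and the untrusted grace period described above.
# Cutoff scores are illustrative; the grace period default is configurable.
UNTRUSTED_GRACE_SECS = 7 * 24 * 3600  # default one week

def trust_level(rep: int) -> str:
    """Map a rep score to one of the four trust levels."""
    if rep < -10:
        return "untrusted"
    if rep < 10:
        return "neutral"
    if rep < 50:
        return "positive"
    return "trusted"

def should_drop(rep: int, untrusted_since, now: float) -> bool:
    """Only drop a user who has stayed untrusted for the whole grace period."""
    return (trust_level(rep) == "untrusted"
            and untrusted_since is not None
            and now - untrusted_since >= UNTRUSTED_GRACE_SECS)
```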

On your first example, my only pushback is that I don't like logging IP addresses for user privacy reasons and try to avoid it when I can, but IPs are also public, so /shrug, I might do this, but right now I'm not. I kinda like the idea of lowering rep for repeated rate limit failures. This could help with bots that are not necessarily bad but post a lot of random stuff. E.g., I've seen some bots that repost MSM news every minute or so; if no one was interacting, they would be dropped eventually.

The community mod thing kinda sounds like what Ditto does, but there it's all on the relay admins to make the final decision on whether someone gets kicked or not, based on user reports (it's not a voting system like the one you describe, though). That was kind of my attempt with the 1984 reports. They are weighted a bit stronger than normal things like spam check failures and invalid NIP-05s.

I'm debating making an admin interface with things like banned npubs, so it's easy for admins to unblock people if they want. But right now it's not something I'm super worried about.

Makes total sense, nostr:nprofile1qqsyfhqu9kuu877hhm5j2lkwk5478nuvgza00d3lgmjkkk9px8r57zcprfmhxue69uhkvun9v4kxz7fwwdhhvcnfwshxsmmnwshszxmhwden5te0w35x2en0wfjhxapwdehhxarjxyhxxmmd9uqsuamnwvaz7tmwdaejumr0dshszy0a9p. Thanks for replying to me and for the clarifications.

I honestly think IP logging is unavoidable. For example, think of the "EmojiGuy" attack, which bypassed spam filters. I know that we can always build increasingly sophisticated spam filters, but it's a game of cat and mouse: bad actors will find ways around even the most advanced systems.

EmojiGuy wasn’t even rotating IPs, only keys, and still managed to create chaos on multiple relays. Now imagine "EmojiGuy 2.0" using IPv6, rotating IPs over a /48 or /64 subnet. Then consider "EmojiGuy 3.0," spamming from a gazillion different IPv4 and IPv6 addresses. We’ll need a quick way to identify such attacks and temporarily block ranges of IPs to respond effectively.
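For what it's worth, range blocks like that are cheap with the standard library. The blocked prefix below is a documentation range, purely illustrative:

```python
# Sketch of temporarily blocking an IPv6 range, as suggested for "EmojiGuy 2.0"
# style attacks rotating within a /48 or /64. The range here is illustrative.
import ipaddress

blocked = [ipaddress.ip_network("2001:db8::/48")]  # hypothetical offending range

def is_blocked(ip: str) -> bool:
    """True if the connecting address falls inside any blocked range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in blocked)
```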

The Ditto model is interesting. It’s close to what most Twitter-like tools running on top of ActivityPub are doing. Still, while community admin/mod tools are required to run a resilient server, they aren’t necessarily sufficient.

The likely result of Ditto's model is that folks running Ditto servers will soon realise that NIP-05 isn't enough to reliably identify users. We’ll likely end up with islands of Ditto servers whitelisting only other "trustworthy" domains (i.e., other Ditto servers and similar tools with user registration forms and centralised moderation). Don't get me wrong, it certainly works. The Fediverse is brilliant, and it has grown to its current size despite many defederated forks, blacklists, death threats to server admins and developers, etc. I'm a huge fan of ActivityPub and believe people are overcoming these challenges there. However, I hope that the Nostr experiment takes a different direction, at least for the sake of diversity and not putting all our eggs in the same basket.

I really like your idea of "user trust" with the right incentives to encourage good behaviour. IME this sort of gamification of user reputation works. Over time, hopefully, we'll have trustworthy users who not only self-manage and report bad behaviour but also actively participate in decision-making within community-managed relays (hence my focus on voting, achieving consensus, etc.). Of course, one step at a time — getting the "reputation" system in place alone is already a huge undertaking, and it's awesome that you're already working on it.

I hope my comments were helpful. As I mentioned before, I'm happy to help in any way I can. The more experiments we run to make Nostr resilient to attacks while still welcoming to new users, the better things will get. 💪

A not-at-all-thought-through idea that came to mind just now when reading this. Maybe the "vote to kick" system could be a betting market:

You vote for/against kicking an account with sats. If the majority of the votes (sats) are for kicking that account, the ones who voted for kicking get the sats of those who voted against it. And vice versa...

Not sure how/if this would work in practice, just a random idea.
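Just to make the mechanics concrete, here's one toy way the settlement could work, with winners splitting the losers' pot pro rata. Entirely hypothetical, like the idea itself:

```python
# Toy sketch of the sats betting market floated above: the side with the bigger
# pot wins, and winners split the losers' sats pro rata. Purely illustrative.
def settle(kick_bets: dict, keep_bets: dict) -> dict:
    """Return payouts (npub -> sats) for the winning side."""
    kick_pot, keep_pot = sum(kick_bets.values()), sum(keep_bets.values())
    if kick_pot > keep_pot:
        winners, losing_pot = kick_bets, keep_pot
    else:
        winners, losing_pot = keep_bets, kick_pot
    win_pot = sum(winners.values())
    # each winner gets their stake back plus a pro-rata share of the losers' pot
    return {npub: stake + losing_pot * stake // win_pot
            for npub, stake in winners.items()}
```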