> So now, we base the rate limiting on the number of requests per second that the relay can send to the rank provider. It's simpler, more effective, and users are no longer penalized for being behind a shared IP.
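
(For context, here is roughly what an outbound requests-per-second budget like that could look like; the class, names, and numbers below are my own hypothetical sketch, not the relay's actual code.)

```typescript
// Hypothetical sketch: cap outgoing rank requests at a fixed rate,
// independent of which client IP originated them.
class OutboundRateLimiter {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private readonly maxPerSecond: number) {
    this.tokens = maxPerSecond;
  }

  // Returns true if a request to the rank provider may be sent now.
  tryAcquire(): boolean {
    const now = Date.now();
    // Refill tokens proportionally to elapsed time, capped at the budget.
    this.tokens = Math.min(
      this.maxPerSecond,
      this.tokens + ((now - this.lastRefill) / 1000) * this.maxPerSecond
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// One shared limiter for the relay -> rank-provider link.
const limiter = new OutboundRateLimiter(50); // 50 req/s is an arbitrary example
if (limiter.tryAcquire()) {
  // forward the rank request to the provider
}
```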

Nice, this should definitely make UX much better for VPN users!

I am curious, though: if you hit that rate limit (say, during a spam wave), how does the relay decide which requests to prioritize? It seems like a spammer could still "crowd out" legitimate requests by jamming the queue. That is one of the key problems I'm targeting: giving the relay a way to distinguish and prioritize higher "bonded" traffic over cheap spam when resources are scarce.
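
To make that concrete, here is roughly the kind of bond-weighted ordering I have in mind (purely illustrative; `RankRequest`, `bond`, and the scoring are my assumptions, not anything the relay does today):

```typescript
// Illustrative only: drain the queue highest-bond-first when the
// outbound budget is scarce, so cheap spam waits behind bonded traffic.
interface RankRequest {
  pubkey: string;
  bond: number; // assumed stake/score attached to the request
}

class BondedQueue {
  private items: RankRequest[] = [];

  enqueue(req: RankRequest): void {
    this.items.push(req);
  }

  // Pop the request with the highest bond; O(n) scan is fine for a sketch.
  dequeue(): RankRequest | undefined {
    if (this.items.length === 0) return undefined;
    let best = 0;
    for (let i = 1; i < this.items.length; i++) {
      if (this.items[i].bond > this.items[best].bond) best = i;
    }
    return this.items.splice(best, 1)[0];
  }
}
```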

Right, that's a good question. There are a few factors at play that mitigate the issue. First, we implemented a preserve-stale-cache policy for failed rank requests: users whose rank was previously computed successfully keep that rank, so they are not affected by transient failures caused by spam conditions.
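
In rough pseudocode, the fallback looks something like this (simplified, not the exact code we run):

```typescript
// Preserve-stale-cache policy: on a failed rank lookup, fall back to the
// last successfully computed rank instead of dropping it.
const rankCache = new Map<string, number>();

async function getRank(
  pubkey: string,
  fetchRank: (pubkey: string) => Promise<number>
): Promise<number | undefined> {
  try {
    const fresh = await fetchRank(pubkey);
    rankCache.set(pubkey, fresh); // refresh the cache on success
    return fresh;
  } catch {
    // Provider unreachable or rate-limited: keep serving the stale rank
    // so previously-ranked users are not penalized by transient failures.
    return rankCache.get(pubkey);
  }
}
```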

This is not a perfect solution, and there are still some edge cases to cover. Addressing those will require, as you mentioned, some kind of prioritization, which is a more complex task. For now, the most harmful edge cases are handled, and we will keep thinking about this to find a better solution.

Certainly! That should probably cover 99% of cases, so it makes perfect sense to prioritize stability over prioritization logic for now.

Thanks for walking me through how you deal with spam. It's really helpful to have these real-world examples to compare against my theoretical work.