Sounds like a great setup! The main bottleneck for us right now is that strfry's plug-in architecture is synchronous: it waits on our verdict for each message before moving on to the next, there's no async support yet (just one plugin instance per relay we're streaming from), and it runs before deduping. We also have no access to network information like IPs that could help us manage things at the network level, since everything is coming from relays (your setup sounds similar in that way). The spam detection itself is fairly straightforward right now, mostly metadata lookups in Redis, but the rest of the system is light and/or async, and funneling everything through a single linear point just can't handle bursts. We also don't have much control, with the current implementation, over how data is handled once it starts backing up during a burst. We actually have RAM to spare; it's the disk that takes a beating, so we'd love a bit more control there, even if it's just distributing some of the I/O across different mounted disks.

I don't think we need much more capacity for today's traffic (though the damus relay is definitely making the linear bottleneck an issue), but I'm also concerned about scale if nostr breaks into new markets. We really want to maintain availability as a relay, especially as a premium one people are paying for, so we want to be able to handle spikes gracefully! I also anticipate the spam detection growing in complexity and want to be able to distribute that processing to prevent latency issues. The queues would take some of the strain off strfry when things get busy and give us some ability to take advantage of autoscaling for efficient infra usage and latency control.

Sounds like we're doing similar things though, so it may be worth collaborating, especially if we can design components that have crossover as utilities.
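For context, the rough shape we're thinking of for offloading the plugin: answer strfry immediately with the cheapest possible Redis check, and push the event onto a queue so the heavier spam analysis runs async behind it. This is just a minimal sketch, assuming strfry's line-delimited JSON plugin protocol (one event per line on stdin, a verdict with id/action/msg per line on stdout) and a local Redis; the key and stream names (`spam:banned_pubkeys`, `spam:pending`) are made up for illustration:

```python
#!/usr/bin/env python3
# Minimal sketch of a strfry write-policy plugin that answers fast and defers
# heavy spam analysis to a queue. Assumes strfry's line-delimited JSON plugin
# protocol (one event per line on stdin, a {"id", "action", "msg"} verdict per
# line on stdout); the Redis key/stream names here are hypothetical.
import json
import sys

import redis

# Short socket timeout so a slow Redis can't stall strfry's write path.
r = redis.Redis(decode_responses=True, socket_timeout=0.05)

SPAM_QUEUE = "spam:pending"         # hypothetical stream for async deep checks
BANNED_SET = "spam:banned_pubkeys"  # hypothetical set maintained by workers


def fast_verdict(event: dict) -> str:
    """Cheapest possible check: is the pubkey already flagged?"""
    try:
        if r.sismember(BANNED_SET, event["pubkey"]):
            return "reject"
    except redis.exceptions.RedisError:
        # Redis slow or unavailable: fail open so the relay stays available.
        pass
    return "accept"


for line in sys.stdin:
    req = json.loads(line)
    event = req["event"]
    action = fast_verdict(event)

    if action == "accept":
        try:
            # Hand the event to the async spam pipeline; workers consume the
            # stream, do the heavy analysis, and update BANNED_SET out of band.
            r.xadd(SPAM_QUEUE, {"event": json.dumps(event)},
                   maxlen=100_000, approximate=True)
        except redis.exceptions.RedisError:
            pass  # dropping a deferred check beats blocking the relay

    verdict = {"id": event["id"], "action": action,
               "msg": "" if action == "accept" else "blocked: flagged pubkey"}
    print(json.dumps(verdict), flush=True)
```

The nice property is that the only synchronous work per event is a single Redis round trip; everything else sits behind the queue, which is exactly where autoscaled workers could pick it up during bursts.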