Maintaining my own relay codebase is not really fun :( I might give up on it. Which implementation is the best for a beefy server right now? #[0] what are you running for the damus relay?
Discussion
nostr-rs-relay. It’s fast!
https://github.com/scsibug/nostr-rs-relay is SQLite-only. #[3] do you have plans to support other DBs?
I'm tempted to do some benchmarking. Would be a good baseline for future relay grants ;)
I've thought about it. It would be a lot of work.
No plans to support other DBs. I'd be open to it if someone wanted to do the work to make it pluggable, but I am partial to it being the best SQLite relay out there, not necessarily the best for all DBs.
Fair enough. But if relays store all historical data, then it's going to be a pain to be a mega relay with a SQLite-based backend. SQLite isn't exactly built for scale.
I figure it can scale "big enough" for a decent sized community to use, and hopefully will be more than usable on a $5/month tiny VPS. But I'm also eager for first-hand knowledge of how far SQLite /can/ scale.
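For what it's worth, a lot of SQLite's headroom under concurrent read-heavy load comes from a few connection settings. The sketch below (Python stdlib `sqlite3`, pragma values chosen for illustration, not taken from nostr-rs-relay's actual configuration) shows the kind of tuning typically applied before concluding SQLite can't scale:

```python
import sqlite3

def open_relay_db(path: str) -> sqlite3.Connection:
    """Open a SQLite database with settings commonly used for
    many-readers / few-writers workloads, like a relay serving
    subscriptions. Values here are illustrative defaults."""
    con = sqlite3.connect(path)
    # WAL mode lets readers proceed while a single writer appends,
    # which matches a relay's many-subscribers / few-writes pattern.
    con.execute("PRAGMA journal_mode=WAL")
    # NORMAL trades a little durability for write throughput.
    con.execute("PRAGMA synchronous=NORMAL")
    # Larger page cache; a negative value is interpreted as KiB (here 64 MiB).
    con.execute("PRAGMA cache_size=-65536")
    return con
```

With 32GB of RAM, a generous `cache_size` plus the OS page cache means most queries never touch disk at all.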
Ok, good to know. I'm shopping for options for what to put on my beefy server (8 vCPU, 32GB RAM) and assumed SQLite might not be the solution. On the other hand, all nostr events still fit comfortably into RAM :D
As of now, my relay prints stats like the ones below every 10s, and these 62 subscribers keep the machine very busy: all 8 CPUs maxed out, 40 Mbit/s sustained for hours, ...
```
Sun Dec 18 23:56:01 UTC 2022: pinging all sockets. 322MB / 490MB free.
62 subscribers maintain 62 channels and are monitoring these queries:
21 times kinds[1, 2], since
19 times kinds[1, 3, 4], tags, since
19 times authors, kinds[0, 1, 2, 3, 4], since
19 times ProbabilisticFilter( kinds: [0] ids/authors/tags: 1000s )
12 times authors, kinds[3]
7 times kinds[0, 1, 2, 7], since, limit
7 times kinds[3], tags
7 times kinds[1, 42, 7, 6], tags, since
6 times authors, kinds[1]
6 times kinds[1], limit
5 times kinds[4], tags, limit
5 times kinds[4], tags, since
5 times authors, kinds[4], since
4 times authors, kinds[4]
4 times ProbabilisticFilter( kinds: [1, 42, 7, 6] ids/authors/tags: 100s after )
4 times ProbabilisticFilter( kinds: [0] ids/authors/tags: 100s after )
3 times authors, kinds[0]
3 times ids, kinds[1], limit
2 times since
2 times kinds[0, 1, 2, 3, 4], tags
2 times ids
2 times authors, kinds[0], since
2 times tags, limit
2 times ids, limit
2 times kinds[1, 42, 7, 6], tags, limit
2 times ProbabilisticFilter( kinds: [1, 42, 7, 6] ids/authors/tags: 1000s after )
2 times ProbabilisticFilter( kinds: [0] ids/authors/tags: 1000s after )
1 times ProbabilisticFilter( ids/authors/tags: 100s after )
1 times tags, since
1 times NoMatchFilter
1 times ProbabilisticFilter( ids/authors/tags: 100s )
1 times authors, kinds[1, 42, 7, 6], since
1 times kinds[1, 42], until, limit
1 times authors, kinds[1, 42, 7, 6], limit
1 times ProbabilisticFilter( kinds: [1, 42, 7, 6] ids/authors/tags: 100s )
1 times ProbabilisticFilter( kinds: [0] ids/authors/tags: 100s )
1 times ProbabilisticFilter( kinds: [0] ids/authors/tags: 10000s )
1 times authors, kinds[1, 42], limit
1 times authors, kinds[3, 0, 6]
1 times authors, kinds[1, 42, 5, 6, 7], since, limit
1 times kinds[1, 42, 5, 6, 7], tags, since, limit
1 times authors, kinds[4], limit
1 times kinds[1, 42, 5, 6, 7], until, limit
1 times ProbabilisticFilter( kinds: [1, 42, 5, 6, 7] ids/authors/tags: 1000s )
82721 Events sent in 3012ms.
14 Events received via Websocket.
50 Channels closed.
19 Sessions closed.
```
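The `ProbabilisticFilter` entries in the stats above suggest that subscriptions listing hundreds or thousands of ids/authors/tags are collapsed into a probabilistic set structure rather than matched exactly. The actual implementation isn't shown; this is a hypothetical reconstruction using a plain Bloom filter, which has the matching property such a relay would need (no false negatives, rare false positives):

```python
import hashlib

class ProbabilisticFilter:
    """Hypothetical Bloom-filter sketch for REQ filters carrying
    1000s of ids/authors/tags. A set bit can collide (false positive),
    but an added item is never missed (no false negatives)."""

    def __init__(self, size_bits: int = 1 << 16, hashes: int = 4):
        self.size = size_bits
        self.hashes = hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive k independent bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: str) -> bool:
        # True means "probably subscribed"; a stray extra event sent to a
        # client is harmless, so false positives are an acceptable trade
        # for O(1) matching instead of scanning thousands of keys.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```

That trade-off fits a relay well: the cost of a false positive is one unwanted event on the wire, while the saving is not comparing every incoming event against thousands of exact keys per subscription.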
https://github.com/scsibug/nostr-rs-relay
Available on docker as well.
Very light, runs comfortably on a 2 core 2GB RAM VPS with a tiny SSD.
For personal use, sure, but has anyone ever benchmarked how many big queries per second it can handle, and how many concurrent websocket subscriptions?
Seen this https://github.com/Cameri/nostrillery? I haven’t used it yet.
I'd be interested to know more about that
Currently it's a pain to debug: I make changes locally, push the commit, then on the server I kill the process, pull the changes, and compile/run. I haven't figured out how to debug properly, with pausing execution, inspecting what went wrong, a breakpoint in a catch block ...
Maybe a log aggregator would help.
Logging is very much inferior to being able to debug execution as it happens.
My logs show where it crashes, but the exceptions ... I can't tell from the stack trace how to ignore the ones that fail to insert an event because it's a duplicate. Sounds trivial, but the lib has no upsert.
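The poster's stack is unknown (the mention of exceptions and stack traces suggests a JVM language), so here is a sketch in Python's stdlib `sqlite3` of the two usual workarounds when the DB layer has no upsert: let the database silently skip duplicates, or catch only the narrow constraint-violation error instead of letting every exception look like a crash:

```python
import sqlite3

# Events keyed by their id, as in a nostr relay's event store.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id TEXT PRIMARY KEY, raw TEXT)")

def insert_event(event_id: str, raw: str) -> bool:
    """Option 1: INSERT OR IGNORE turns a duplicate into a no-op.
    Returns True if the event was stored, False if it already existed."""
    cur = con.execute(
        "INSERT OR IGNORE INTO events (id, raw) VALUES (?, ?)",
        (event_id, raw),
    )
    return cur.rowcount == 1

def insert_event_catching(event_id: str, raw: str) -> bool:
    """Option 2: catch only the specific constraint error, so duplicates
    are handled quietly while genuine failures still surface."""
    try:
        con.execute("INSERT INTO events (id, raw) VALUES (?, ?)",
                    (event_id, raw))
        return True
    except sqlite3.IntegrityError:
        return False
```

The equivalent in most stacks is either an `INSERT ... ON CONFLICT DO NOTHING` statement or catching the library's specific unique-constraint exception type rather than a blanket catch.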