#ORLY #devstr #progressreport
so, after discovering how much time orly was spending marshaling events for log statements that never printed or weren't needed, and seeing the harm it did to performance, i've fixed that.
i'm looking forward to nostr:npub1acr7ycax3t7ne8xzt0kfhh33cfd5z4h8z3ntk00erpd7zxlqzy3qrn2tqw getting the benchmark refactored into a docker container and seeing the results.
i already know that relayer is appalling, and strfry is piss-poor, but i'm pretty sure orly performs better with lower latency than khatru, which is the best relay of the lot in the initial test results.
it was literally marshaling events 3-4 times per req and event submission, all over the place, and in the case of a mass upload that's definitely gonna ding the speed.
khatru does 9500 events/s on what is obviously better hardware than mine; on my rig it gets 7700 events/s on publish. so i'm expecting orly to do something like 12000, maybe more, maybe a bit less. anyhow, i will see soon.
after that setup is done to run the tests in a docker container, i'm gonna see about adding the rust relay so we can really see what's what.
i'm betting the rust relay will turn out to not be much faster than the go relays, maybe even slower, and it will definitely have higher latency and handle bursts worse. you just can't beat coroutines. get with the program, people: cut out half of your language's complexity and replace it with coroutines and channels.
latency matters more than throughput for network apps and UIs; throughput only really matters for stuff like bulk conversion of data formats.