for me, obviously, part of it is going to be about being vindicated in my methods and choice of language. raw concurrent throughput will show a lot, but the profiling and latency measures are also very important.

i refactored the database to perform the queries entirely in one thread, and made the transactions smaller so that they should interleave well under concurrency. this is one thing that i want to get right, so i will probably be doing more profiling on the concurrent random querying to find where the bottlenecks are. to be honest i feel like there probably are some, likely in some relatively new code that was almost completely written by an LLM coding agent. i already noticed a lot of random and stupid stuff in code it generated, so i'm sure there is some low hanging fruit there too.

it's my opinion that the less latency there is in request processing, the broader the use cases. stuff like ContextVM and possible realtime collaboration protocols need this to be optimal.

i had really bad experiences working with the khatru codebase when i forked it 18 months ago. there were way, way too many goroutines competing for resources, and it literally ended the possibility of any further work through the sponsor who paid for that work. most of the code i wrote has ended up in ORLY tho. i wrote a lot of fast encoders and more recently experimented with better database index designs. but it's possible that there is contention in there, or maybe there could be more concurrency. anyhow.

yeah, i'm gonna work on the script to run the tests now. i would like to see that working before i finish my day today.
