i don't have a lot; never done it professionally. i nearly got my Network+ back in 2016, i've been using unix shells since 1995, and i can write adequate scripts and dockerfiles

as a dev, i'm very interested in the structural limits of hardware though... when i was a kid i saw too many amiga demos and what those crazies could do left a big stamp on me

like, on the subject of disks, i know badger has options that reduce how often it flushes its logs and runs compactions, and the kernel can be tuned to hold dirty pages in the cache longer, but eventually it all has to get written
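
here's a rough sketch of the kind of knobs i mean - option names assume badger v4, and the values are just to show the direction, not tuning advice:

```go
// rough sketch: push badger toward fewer, bigger disk writes.
// option names assume badger v4; values are illustrative only.
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	opts := badger.DefaultOptions("/tmp/badger-demo").
		// don't fsync every write; the OS page cache absorbs bursts,
		// trading crash durability for far fewer physical writes
		WithSyncWrites(false).
		// keep more memtables in RAM so flushes to level 0 are rarer
		WithNumMemtables(8).
		// fewer background compactors = less write amplification
		// (badger needs at least 2)
		WithNumCompactors(2).
		// bigger value log files mean fewer rotations and GC passes
		WithValueLogFileSize(1 << 30) // 1 GiB

	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// the kernel-side equivalents are sysctls like vm.dirty_ratio,
	// vm.dirty_background_ratio and vm.dirty_expire_centisecs, which
	// control how long dirty pages may sit in the page cache before
	// writeback... but as i said, eventually it all hits the platter
}
```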

and regarding bitcoin-core, and indeed btcd, their database implementations leave a lot to be desired (both are running ancient leveldb). and all the hype about strfry, when its storage engine is basically LMDB, *a leveldb-vintage key/value store whose big trick is memory mapping*, is so horribly yawn to me i can't stand it

since 2016 there has been the WiscKey paper, which demonstrated that splitting the store in two - keys and small value pointers in the LSM tree, values in a separate append-only log - drastically reduces the disk writes required, and makes it so you can engineer the database around the key index, which is small enough to keep preferentially in memory and flush infrequently. that paper led to badger, and on top of it dgraph, which as far as i know is the best performing graph database
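
to make the trick concrete, here's a toy version of the wisckey split (names like toyStore and the /tmp path are made up for illustration; a real engine like badger adds crash recovery, checksums and value log GC on top). the point is that the fat value is written exactly once, and only the tiny pointer ever churns through the index:

```go
// toy version of the wisckey split (made-up names, no crash safety,
// no checksums, no value log GC - badger does all of that for real)
package main

import (
	"fmt"
	"log"
	"os"
)

// valuePointer is all the key index has to carry instead of the value
type valuePointer struct {
	offset int64
	length int
}

type toyStore struct {
	vlog  *os.File                // append-only value log on disk
	index map[string]valuePointer // key -> pointer, small, lives in memory
	off   int64
}

func openToy(path string) (*toyStore, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR|os.O_APPEND, 0o644)
	if err != nil {
		return nil, err
	}
	st, err := f.Stat()
	if err != nil {
		return nil, err
	}
	return &toyStore{vlog: f, index: make(map[string]valuePointer), off: st.Size()}, nil
}

// Set appends the value once; only the tiny pointer changes in the index
func (t *toyStore) Set(key string, value []byte) error {
	n, err := t.vlog.Write(value)
	if err != nil {
		return err
	}
	t.index[key] = valuePointer{offset: t.off, length: n}
	t.off += int64(n)
	return nil
}

// Get follows the pointer back into the value log
func (t *toyStore) Get(key string) ([]byte, error) {
	vp, ok := t.index[key]
	if !ok {
		return nil, fmt.Errorf("not found: %q", key)
	}
	buf := make([]byte, vp.length)
	_, err := t.vlog.ReadAt(buf, vp.offset)
	return buf, err
}

func main() {
	t, err := openToy("/tmp/vlog-demo")
	if err != nil {
		log.Fatal(err)
	}
	defer t.vlog.Close()

	if err := t.Set("note:1", []byte("a big value that never enters the key index")); err != nil {
		log.Fatal(err)
	}
	v, err := t.Get("note:1")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s\n", v)
}
```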

i did mean to build a badger driver for btcd back in 2019 but i never got around to it... btcd's performance is so abysmal i would never use it in production anyway, and it's the default backend for LND, which to me just reinforces your point - most devs have little to no understanding of the hardware they are writing code to run on
