How does badgerdb compare to leveldb?
it has a separate value log, which means keys live on their own in the LSM tree and you can iterate and update them without touching the values at all. this is where "write amplification" comes in: in leveldb the tables carry the full values, so they are much bigger and cost far more to restructure and append to during compaction
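a minimal sketch of what key-only iteration looks like with badger's iterator API (the database path is just an example); setting PrefetchValues to false means the scan never reads the value log:

```go
package main

import (
	"fmt"
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	db, err := badger.Open(badger.DefaultOptions("/tmp/badger-example"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	err = db.View(func(txn *badger.Txn) error {
		opts := badger.DefaultIteratorOptions
		opts.PrefetchValues = false // key-only: the value log is never read
		it := txn.NewIterator(opts)
		defer it.Close()
		for it.Rewind(); it.Valid(); it.Next() {
			fmt.Printf("%x\n", it.Item().Key())
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```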
this makes it much more viable to build powerful search indexes out of keys alone, which is exactly how the badger eventstore that fiatjaf wrote implements filter searches: there are something like 8 different kinds of keys that let you match quickly on authors, timestamps, and so forth, plus a scheme that adds a key for each tag in an event
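to give the flavor, here is a hypothetical composite index key in Go; this is not fiatjaf's exact layout, just a sketch of the idea (the prefix byte, the 8-byte pubkey prefix, the inverted timestamp, and the 64-bit serial are all assumptions):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// authorTimeKey builds a hypothetical index key: one prefix byte naming the
// index, the first 8 bytes of the author pubkey, the big-endian inverted
// timestamp, and the event's 64-bit serial to keep keys unique.
func authorTimeKey(indexPrefix byte, pubkey [32]byte, createdAt, serial uint64) []byte {
	key := make([]byte, 1+8+8+8)
	key[0] = indexPrefix
	copy(key[1:9], pubkey[:8])
	binary.BigEndian.PutUint64(key[9:17], ^createdAt) // inverted: newest sorts first
	binary.BigEndian.PutUint64(key[17:25], serial)
	return key
}

func main() {
	var pk [32]byte
	fmt.Printf("%x\n", authorTimeKey(0x02, pk, 1700000000, 42))
}
```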
implementing fast range searches with badger is therefore much easier, and it doesn't fall over on large data sets, precisely because of this separation
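a range scan over such an index is just a seek to the prefix and a walk while the prefix still matches; Seek and ValidForPrefix are real badger iterator methods, while indexAuthor and the key layout are the same hypothetical scheme as above:

```go
package eventindex

import (
	"fmt"

	badger "github.com/dgraph-io/badger/v4"
)

// indexAuthor is a hypothetical prefix byte marking author-index keys.
const indexAuthor byte = 0x02

// scanAuthorIndex walks every key under the author-index prefix without
// ever touching the value log.
func scanAuthorIndex(db *badger.DB) error {
	return db.View(func(txn *badger.Txn) error {
		opts := badger.DefaultIteratorOptions
		opts.PrefetchValues = false // key-only scan
		it := txn.NewIterator(opts)
		defer it.Close()
		prefix := []byte{indexAuthor}
		for it.Seek(prefix); it.ValidForPrefix(prefix); it.Next() {
			fmt.Printf("match: %x\n", it.Item().Key())
		}
		return nil
	})
}
```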
i had been avoiding this part of the codebase, inherited from the eventstore/badger implementation, but right now i'm cleaning the hell out of it and restructuring it a little... in several places it needlessly trims 64-bit values down to 32 bits, which means the database has maybe a 2-year lifespan before the primary record keys overflow, and the timestamps are also shortened to 32 bits, which means they overflow circa 2038
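to see why the truncation is a silent bug rather than a loud one, here is what happens when a serial past the 32-bit ceiling gets trimmed (standard library only):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	serial := uint64(5_000_000_000) // past the 32-bit ceiling of 4,294,967,295

	// the bug class: truncating to 32 bits silently wraps around
	var short [4]byte
	binary.BigEndian.PutUint32(short[:], uint32(serial))
	fmt.Println(binary.BigEndian.Uint32(short[:])) // 705032704, not 5000000000

	// the fix: keep the full 64 bits in the key encoding
	var full [8]byte
	binary.BigEndian.PutUint64(full[:], serial)
	fmt.Println(binary.BigEndian.Uint64(full[:])) // 5000000000
}
```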
not that these are big concerns right this minute for a high-time-preference programmer, but for a serious, long-term codebase this is exactly the sort of thing that cannot be allowed to continue
at 64 bits these numbers effectively never run out; storage capacity would have to grow far beyond anything plausible before they do, whereas the 32-bit versions will blow up in about 2 years with a mini y2k bug
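a quick back-of-the-envelope, using only the standard library, for how far apart those two horizons are:

```go
package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	// signed 32-bit unix timestamps hit their ceiling in early 2038
	fmt.Println(time.Unix(math.MaxInt32, 0).UTC()) // 2038-01-19 03:14:07 +0000 UTC

	// a signed 64-bit timestamp is good for roughly 292 billion years
	fmt.Printf("%.0f years of headroom\n", float64(math.MaxInt64)/(365.25*24*3600))
}
```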