ah, just to explain how you do things with badger, because it differs from most other key/value stores due to its separation of the key and value tables (keys live in the LSM tree, values in a separate value log)...
because writing values doesn't force any writes on the key table, the keys stay in sorted order much longer: generally, once compacted, forever compacted (compaction being the process of replaying the write log out into an easily iterated, pre-sorted structure)
as a result, the best strategy with badger for storing any kind of information that won't change and needs to be scanned a lot is to put the value directly in the key itself. you do this very often for immutable stuff, such as tombstones
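for example, a tombstone write can be a key-only entry with an empty value. a minimal sketch against the badger v4 API (the one-byte prefix constant and the helper name are illustrative guesses, not realy's actual code):

```go
package main

import (
	"crypto/sha256"
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

// tombstonePrefix is a hypothetical one-byte table prefix for tombstones.
const tombstonePrefix = 0x54

// tombstoneKey builds the key: the prefix plus the first (most
// significant) half of the 32-byte event ID hash. all the information
// lives in the key, so the value can be empty.
func tombstoneKey(eventID [sha256.Size]byte) []byte {
	key := make([]byte, 1, 1+sha256.Size/2)
	key[0] = tombstonePrefix
	return append(key, eventID[:sha256.Size/2]...)
}

func main() {
	db, err := badger.Open(badger.DefaultOptions("/tmp/badger-demo"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	id := sha256.Sum256([]byte("some serialized event"))

	// the tombstone is written with a nil value, so the entry lives
	// entirely in badger's key table (the LSM tree).
	err = db.Update(func(txn *badger.Txn) error {
		return txn.Set(tombstoneKey(id), nil)
	})
	if err != nil {
		log.Fatal(err)
	}
}
```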
the key table is also used for searching, as you would expect, but this is the reason why a database written on badger (properly) is so much faster: it doesn't have to skip past the values when it's scanning, and you don't have to re-compact the keys when you change values. (and yes, it of course has versioning of keys. i don't use this feature, but some number of past versions of a value is retained and can be read back through a dedicated accessor, and more generally the versioning makes the store more resilient, as you would expect.)
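for completeness, here's roughly what that versioning looks like from the outside. a sketch only, assuming the badger v4 API (realy doesn't use this, as said):

```go
package main

import (
	"fmt"
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	// keep the last 3 versions of every key instead of the default 1
	opts := badger.DefaultOptions("/tmp/badger-versions").WithNumVersionsToKeep(3)
	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	key := []byte("k")
	for i := 0; i < 3; i++ {
		// each committed Set creates a new version of the key
		if err := db.Update(func(txn *badger.Txn) error {
			return txn.Set(key, []byte(fmt.Sprintf("v%d", i)))
		}); err != nil {
			log.Fatal(err)
		}
	}

	// AllVersions walks every retained version, newest first
	err = db.View(func(txn *badger.Txn) error {
		itOpts := badger.DefaultIteratorOptions
		itOpts.AllVersions = true
		it := txn.NewIterator(itOpts)
		defer it.Close()
		for it.Seek(key); it.ValidForPrefix(key); it.Next() {
			item := it.Item()
			val, err := item.ValueCopy(nil)
			if err != nil {
				return err
			}
			fmt.Printf("version=%d value=%s\n", item.Version(), val)
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```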
so, yeah, the current arrangement for tombstones in realy is that the first (leftmost, most significant) half of the event ID hash is the key. finding one is thus simple and fast: trim off the last half of the ID, prepend the tombstone key prefix, and you can just use the "get" function on the transaction instead of making a whole iterator. very neat, and very fast.
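in code, that lookup is something like this (again a sketch; the prefix constant and function name are made up for illustration, reusing the layout from the earlier sketch):

```go
package main

import (
	"errors"

	badger "github.com/dgraph-io/badger/v4"
)

const tombstonePrefix = 0x54 // same illustrative prefix as above

// isTombstoned checks whether an event has been deleted: trim the
// 32-byte event ID to its first half, prepend the tombstone prefix,
// and do a point Get on the transaction, no iterator needed.
func isTombstoned(db *badger.DB, eventID [32]byte) (bool, error) {
	key := append([]byte{tombstonePrefix}, eventID[:16]...)
	var found bool
	err := db.View(func(txn *badger.Txn) error {
		switch _, err := txn.Get(key); {
		case err == nil:
			found = true
			return nil
		case errors.Is(err, badger.ErrKeyNotFound):
			return nil // not tombstoned
		default:
			return err
		}
	})
	return found, err
}
```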
i also exploit these properties of the badger key table in the "return only the ID" functions, via an index whose keys contain the whole event ID after the event's serial number. that way the event itself never has to be fetched or decoded for this case, which is a huge performance optimization as well.
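a sketch of how such an index can work (the one-byte prefix and the 8-byte big-endian serial width are assumptions for illustration; the "whole ID after the serial" layout is the point from above):

```go
package main

import (
	"encoding/binary"

	badger "github.com/dgraph-io/badger/v4"
)

const idIndexPrefix = 0x49 // illustrative one-byte table prefix

// idIndexKey lays out: prefix | 8-byte big-endian serial | 32-byte event ID.
// the full ID lives in the key, so a scan never needs the value log.
func idIndexKey(serial uint64, eventID [32]byte) []byte {
	key := make([]byte, 1+8, 1+8+32)
	key[0] = idIndexPrefix
	binary.BigEndian.PutUint64(key[1:9], serial)
	return append(key, eventID[:]...)
}

// allEventIDs returns every indexed event ID without decoding a single
// event: PrefetchValues=false makes this a pure key-table scan.
func allEventIDs(db *badger.DB) ([][32]byte, error) {
	var ids [][32]byte
	err := db.View(func(txn *badger.Txn) error {
		opts := badger.DefaultIteratorOptions
		opts.PrefetchValues = false // keys only, never touch values
		it := txn.NewIterator(opts)
		defer it.Close()
		prefix := []byte{idIndexPrefix}
		for it.Seek(prefix); it.ValidForPrefix(prefix); it.Next() {
			k := it.Item().Key()
			if len(k) != 1+8+32 {
				continue
			}
			var id [32]byte
			copy(id[:], k[9:]) // slice the ID straight out of the key
			ids = append(ids, id)
		}
		return nil
	})
	return ids, err
}
```

the keys-only iterator never touches the value log at all, which is exactly the property of badger's split design being exploited here.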