garbage collector
you track each event's last access time, and when you do a GC run you grab all the event serials along with their last access timestamp and the size of each event, sort them by last access from oldest to newest, total up the sizes, then count off the oldest events until you've freed enough to get down to your "low water mark" target, and delete them
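roughly, the selection step could look like this in Go — a minimal sketch, where the record fields and function names are just illustrative, not the actual implementation:

```go
package gc

import "sort"

// record pairs an event serial with its last access time and stored size
// (all fields hypothetical).
type record struct {
	Serial     uint64
	LastAccess int64 // unix seconds of last read
	Size       int64 // bytes on disk
}

// selectForEviction returns the serials of the least recently accessed
// events whose removal brings the total store size down to the low
// water mark.
func selectForEviction(records []record, totalSize, lowWater int64) (evict []uint64) {
	// oldest last access first
	sort.Slice(records, func(i, j int) bool {
		return records[i].LastAccess < records[j].LastAccess
	})
	for _, r := range records {
		if totalSize <= lowWater {
			break
		}
		evict = append(evict, r.Serial)
		totalSize -= r.Size
	}
	return
}
```

sorting by last access rather than by size keeps this a pure LRU policy; the sizes only decide how many of the oldest entries have to go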
in badger db i did this in a "batch" transaction, which uses divide and conquer to break the operation into a series of parallel operations (ideally one per CPU thread), and it happens so fast
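here's a sketch of what the batched delete looks like with badger's WriteBatch, assuming events are keyed by an 8-byte big-endian serial under a single-byte prefix — the key layout is a guess for illustration, not the real one:

```go
package gc

import (
	"encoding/binary"

	badger "github.com/dgraph-io/badger/v4"
)

// deleteEvents removes the given serials in one batched operation.
// WriteBatch splits the work across multiple internal transactions
// instead of committing one delete at a time.
func deleteEvents(db *badger.DB, serials []uint64) error {
	wb := db.NewWriteBatch()
	defer wb.Cancel() // safe no-op if Flush already succeeded

	for _, serial := range serials {
		key := make([]byte, 9)
		key[0] = 'e' // hypothetical event-record prefix
		binary.BigEndian.PutUint64(key[1:], serial)
		if err := wb.Delete(key); err != nil {
			return err
		}
	}
	// Flush commits any remaining pending writes and waits for completion.
	return wb.Flush()
}
```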
by doing it this way you solve multiple problems with one action, and events that haven't been accessed in a long time are the best candidates for removal... old replaceable events naturally fall into that category, because clients mainly want the newest version; accessing old versions is infrequent, and even then you usually only want the next oldest or so, so stale versions expire from the cache without any extra rules or logic