Also, the 20% slower decode may be purely a product of my data size optimization: most likely it's the cost of compacting all those enormous follow lists, traded for a nearly 50% reduction in binary event size. If so, that's cheap for ~45% compression of the bulkiest events in the data set. They go from 100-500kb down to 50-250kb, which is not trivial when you consider how many of these events there are (probably about 1% by count and 15% by size).
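For scale, here's a quick back-of-envelope sketch of what that buys at the dataset level. The percentages are the ones above; the 100 GiB total store size is a made-up placeholder just to make the arithmetic concrete:

```python
# Rough estimate of total storage saved by compressing the bulky events.
# bulky_share_by_size and compression are from the comment above;
# dataset_bytes is a hypothetical placeholder.

dataset_bytes = 100 * 1024**3        # hypothetical 100 GiB event store
bulky_share_by_size = 0.15           # bulky events are ~15% of total bytes
compression = 0.45                   # ~45% size reduction on those events

saved = dataset_bytes * bulky_share_by_size * compression
print(f"saved ≈ {saved / 1024**3:.1f} GiB "
      f"({saved / dataset_bytes:.1%} of the whole store)")
# -> saved ≈ 6.8 GiB (6.8% of the whole store)
```

So even though these events are only ~1% by count, shrinking them cuts the whole store by roughly 7%, which is a reasonable trade against a decode slowdown that only hits that same small slice of events.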