yes I tested it:
36746 vs 39742. zstd beats it: 35530, but you still have to un-zstd and JSON-decode..
i'll try to see how much faster it is with benchmarks
Yeah, those numbers make sense. I don't know, I feel like we're beating a dead horse when we try to beat compression algorithms.
another nice thing is that this format ensures the id is at the start, meaning it's very easy to reject parsing/verifying the entire thing when checking if you already have it:
for field in packed_note {
    // the id is the first field, so it's the first thing we peek at
    if let ParsedField::Id(id) = field {
        if cache.contains(id) {
            // already have this note: skip parsing/verifying the rest
            break;
        }
    }
}
I had to really hack this into my JSON parser in nostrdb to get this working
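Since the id is a fixed-size field at the start of the buffer, the dedup check doesn't even need a field iterator. Here's a minimal sketch assuming a hypothetical layout where the raw 32-byte id is the very first thing in the packed note (the `already_have` helper and exact layout are illustrative, not nostrdb's actual format):

```rust
use std::collections::HashSet;

// Hypothetical packed-note layout: the 32-byte id is the first field.
// We can reject a note we've already seen by peeking at a prefix of the
// buffer, without parsing or verifying anything else.
fn already_have(packed: &[u8], cache: &HashSet<[u8; 32]>) -> bool {
    match packed.get(..32) {
        Some(prefix) => {
            let mut id = [0u8; 32];
            id.copy_from_slice(prefix);
            cache.contains(&id)
        }
        // too short to contain an id; let the full parser reject it later
        None => false,
    }
}

fn main() {
    let mut cache = HashSet::new();

    // fake packed note: id = bytes 0..32, followed by some payload
    let note: Vec<u8> = (0u8..40).collect();
    let mut id = [0u8; 32];
    id.copy_from_slice(&note[..32]);
    cache.insert(id);

    assert!(already_have(&note, &cache));
    assert!(!already_have(&[0u8; 8], &cache)); // truncated buffer: not a hit
    println!("cache hit: {}", already_have(&note, &cache));
}
```

With JSON you can't do this cheaply, because the `id` key can appear anywhere in the object, so you're forced to tokenize until you find it.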
it's not bad nostr:note1t3cnac2fqr4wcl6a8eve63q3577e77p8rgrgpak8mn77py4dhnfqxwrnyc