https://github.com/fiatjaf/eventstore/blob/master/badger/query.go#L183

nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6

this might be one of the most opaque pieces of source code i have ever read in my life

put some comments in there to explain the if/then/else/if/then/else logic, because right now it's not clear at all

i'm spending half my day tomorrow trying to decipher the meaning of this absurdly opaque piece of code, but you really should not be able to get away with publishing this without being held accountable

Ask me questions directly and I can answer them.

The code creates a set of query specs from a filter -- i.e. if the filter specifies an author and a kind, we return one spec for traversing the 'pubkeyKind' index at the specified key range, if the filter specifies 4 authors and 2 kinds, then we return 8 specs for traversing the 'pubkeyKind' index in all these different combinations.
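The cross-product behavior described above can be sketched like this. The types and function names here are hypothetical stand-ins, not the actual ones from query.go (the real code builds byte-encoded badger key ranges, not structs):

```go
package main

import "fmt"

// querySpec is a simplified, made-up stand-in for an index traversal
// spec; the real code encodes a key range over the 'pubkeyKind' index.
type querySpec struct {
	Author string
	Kind   int
}

// specsFromFilter returns one spec per (author, kind) combination,
// mirroring the "4 authors x 2 kinds -> 8 specs" behavior described above.
func specsFromFilter(authors []string, kinds []int) []querySpec {
	specs := make([]querySpec, 0, len(authors)*len(kinds))
	for _, a := range authors {
		for _, k := range kinds {
			specs = append(specs, querySpec{Author: a, Kind: k})
		}
	}
	return specs
}

func main() {
	specs := specsFromFilter([]string{"a1", "a2", "a3", "a4"}, []int{1, 7})
	fmt.Println(len(specs)) // 8
}
```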

Then we gather all the results in a heap and take them in sorted order and return them through the channel.
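The heap step described above amounts to a k-way merge: each index traversal yields events already sorted, and a heap picks the globally next one. A minimal sketch of that idea, using Go's container/heap over slices standing in for the index cursors (the field names are assumptions, not the library's actual types):

```go
package main

import (
	"container/heap"
	"fmt"
)

// event is a hypothetical stand-in; only the ordering field matters here.
type event struct {
	CreatedAt int64
	stream    int // which traversal this entry came from (bookkeeping)
}

type eventHeap []event

func (h eventHeap) Len() int           { return len(h) }
func (h eventHeap) Less(i, j int) bool { return h[i].CreatedAt > h[j].CreatedAt } // newest first
func (h eventHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *eventHeap) Push(x any)        { *h = append(*h, x.(event)) }
func (h *eventHeap) Pop() any {
	old := *h
	n := len(old)
	e := old[n-1]
	*h = old[:n-1]
	return e
}

// mergeSorted k-way merges streams that are each sorted newest-first.
func mergeSorted(streams [][]event) []event {
	h := &eventHeap{}
	pos := make([]int, len(streams))
	for i, s := range streams {
		if len(s) > 0 {
			heap.Push(h, event{CreatedAt: s[0].CreatedAt, stream: i})
			pos[i] = 1
		}
	}
	var out []event
	for h.Len() > 0 {
		e := heap.Pop(h).(event)
		out = append(out, e)
		if pos[e.stream] < len(streams[e.stream]) {
			next := streams[e.stream][pos[e.stream]]
			heap.Push(h, event{CreatedAt: next.CreatedAt, stream: e.stream})
			pos[e.stream]++
		}
	}
	return out
}

func main() {
	a := []event{{CreatedAt: 9}, {CreatedAt: 5}, {CreatedAt: 1}}
	b := []event{{CreatedAt: 8}, {CreatedAt: 2}}
	for _, e := range mergeSorted([][]event{a, b}) {
		fmt.Println(e.CreatedAt)
	}
}
```

Since each stream is already sorted, the heap only ever holds one entry per stream, which is what keeps the merge cheap.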

The heap part can probably be improved, let me know if you have something better. Maybe sacrifice a little bit of strict ordering to get some performance?



Discussion

i will keep this in my mind as i read it tomorrow

i will just publish the better version, with actual explanation

btw, this is what it means to build in public

we fight with our competitors and allies to get questions answered!

and the public benefits

also, yes, i've hit up hard against this problem with the event store code, because i needed to add a new index/record

just to explain why i'm even bothering

why should i write a new version of the thing if i don't have to?

yes, i can say that you wrote a shit version, because:

you didn't explain it

who knows if it's good or not if i can't understand it????

defend your honor by explaining the algorithm and proving that you are a genius and you are right

i don't mean that sarcastically, at all, i mean that for real

we are programmers

our job is to boil down things to repeatable processes in exhaustive detail that can't be fucked with

I just explained in the other note above.

I don't know why I should have bothered to explain before you asked, because why would I waste time explaining something if I don't know whether anyone will ever use it? I don't even know if I'll still be using it in 2 weeks (though, granted, I've been using it for many months at this point).

lmao

also, it's a series of if/else if/else if/else if

that would have been better structured as a switch, for a start, for the clarity of the top level

there are three cases, if i remember correctly, where two go to two options and one goes to three

some comments explaining the reason for this order of actions would have helped a lot

i will explain it, just as i have also explained the binary formatting of your code in other work

and i won't thank you because decoding bad code is harder than writing it.

tech debt is no joke

you'll understand it when you get older

It's not a series of if/else if/else if whatever. It's as straightforward as it can be, all the verbose bloat comes from badger.

Or maybe you're talking about the part that takes stuff from the heap thing and so on. But that part has a lot of comments.

no, it is not polished code if you have 3 or more conditions and you don't use a switch and you don't explain why there are several conditions... there are a total of some 9 or 10 separate pieces of logic in there, and you string them together with if/else if/else if/if/if/else if/else

that's why switch exists: it is much more concise and exact for 3+ cases
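To illustrate the structural point: a tagless switch groups three-plus mutually exclusive cases more readably than chained if/else. The case names below are invented for illustration, not taken from query.go:

```go
package main

import "fmt"

// chooseIndex is a made-up example of dispatching on filter shape with a
// tagless switch instead of an if/else if chain; the index names are
// hypothetical, loosely echoing the 'pubkeyKind' index mentioned above.
func chooseIndex(hasAuthors, hasKinds, hasTags bool) string {
	switch {
	case hasAuthors && hasKinds:
		return "pubkeyKind"
	case hasAuthors:
		return "pubkey"
	case hasTags:
		return "tag"
	default:
		return "created_at"
	}
}

func main() {
	fmt.Println(chooseIndex(true, true, false))
	fmt.Println(chooseIndex(false, false, true))
}
```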

no, i'm talking about the function that i linked to, not the heap stuff... i'm not a fan of the heap you used either; making a simple priority queue with concrete types is not that hard to write, and then there's no need for all those type assertions
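A hand-rolled priority queue in the spirit of that suggestion might look like this: a binary max-heap over a concrete element type, so there's no interface{}, no container/heap, and no type assertions. The item type is an assumption standing in for whatever the real code orders by:

```go
package main

import "fmt"

// item is a hypothetical element type; the real code would order events
// by their creation timestamp.
type item struct {
	Timestamp int64
}

// pq is a max-heap over concrete items, avoiding container/heap's
// interface-based API and its type assertions.
type pq struct{ items []item }

func (q *pq) Len() int { return len(q.items) }

func (q *pq) Push(it item) {
	q.items = append(q.items, it)
	// sift up until the parent is no smaller
	i := len(q.items) - 1
	for i > 0 {
		parent := (i - 1) / 2
		if q.items[parent].Timestamp >= q.items[i].Timestamp {
			break
		}
		q.items[parent], q.items[i] = q.items[i], q.items[parent]
		i = parent
	}
}

func (q *pq) Pop() item {
	top := q.items[0]
	last := len(q.items) - 1
	q.items[0] = q.items[last]
	q.items = q.items[:last]
	// sift down toward the larger child
	i := 0
	for {
		l, r, largest := 2*i+1, 2*i+2, i
		if l < len(q.items) && q.items[l].Timestamp > q.items[largest].Timestamp {
			largest = l
		}
		if r < len(q.items) && q.items[r].Timestamp > q.items[largest].Timestamp {
			largest = r
		}
		if largest == i {
			break
		}
		q.items[i], q.items[largest] = q.items[largest], q.items[i]
		i = largest
	}
	return top
}

func main() {
	q := &pq{}
	for _, t := range []int64{3, 9, 1, 7} {
		q.Push(item{Timestamp: t})
	}
	for q.Len() > 0 {
		fmt.Println(q.Pop().Timestamp) // 9, 7, 3, 1
	}
}
```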

you don't get the benefit of badger if you think that building keys with the indexes embedded in them, which is not how you do it with most other KV stores, is "bloat"

the keys are a separate log, so reading them is cheaper, and adding them doesn't require any work on the much bigger value log
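The composite-key idea being defended here can be sketched as follows. The field widths and the inverted-timestamp trick are illustrative assumptions, not the library's actual layout: the point is that a plain key-range scan over badger's key log returns matches in order without touching the value log.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// pubkeyKindKey packs an index prefix, a truncated pubkey, the kind, and
// an inverted timestamp into a single key, so lexicographic ascending
// iteration yields newest-first results. Widths here are made up for
// illustration; the real eventstore layout may differ.
func pubkeyKindKey(prefix byte, pubkey8 []byte, kind uint16, createdAt uint32) []byte {
	key := make([]byte, 0, 1+8+2+4)
	key = append(key, prefix)
	key = append(key, pubkey8...) // truncated pubkey, 8 bytes
	key = binary.BigEndian.AppendUint16(key, kind)
	// bit-invert the timestamp so ascending key order == newest first
	key = binary.BigEndian.AppendUint32(key, ^createdAt)
	return key
}

func main() {
	pk := []byte{0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff, 0x11, 0x22}
	k := pubkeyKindKey(0x01, pk, 1, 1700000000)
	fmt.Printf("%x (%d bytes)\n", k, len(k))
}
```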

badger is the best KV store in the field of data storage and you think it's bloated

unfortunately, that seems to be a common misunderstanding, and everyone goes on about LMDB, but that shit is just a fucking giant swap file that depends on the kernel to do its magic, and it still has write amplification problems: if you add keys you have to restructure the values

not the same as badger: you can write keys all day long and it never has to compact the value log!