yeah, the event store interface is kinda wrong too. it should be sorting results before returning them, by the event's own timestamp, not by the insertion time of the record (that assumption breaks when someone uploads a bunch of old events to a relay). i mean, where sqlite or postgresql or the other backends are used, the results are sorted in the engine, but the badger engine that fiatjaf made doesn't sort, and it runs WAY too many threads.
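to illustrate the point: a minimal sketch (hypothetical names, not the actual orly interface) of sorting query results by the event's own `created_at` before returning them, newest first, instead of trusting insertion order:

```go
package main

import (
	"fmt"
	"sort"
)

// Event is a minimal stand-in for a nostr event; only the field
// relevant to ordering is shown.
type Event struct {
	ID        string
	CreatedAt int64 // unix seconds from the event itself, not DB insertion time
}

// sortNewestFirst orders results by the event's own created_at,
// descending. Sorting on record insertion time breaks when old
// events get uploaded to a relay later.
func sortNewestFirst(evs []Event) {
	sort.Slice(evs, func(i, j int) bool {
		return evs[i].CreatedAt > evs[j].CreatedAt
	})
}

func main() {
	evs := []Event{
		{ID: "a", CreatedAt: 100},
		{ID: "b", CreatedAt: 300}, // older event inserted into the DB later
		{ID: "c", CreatedAt: 200},
	}
	sortNewestFirst(evs)
	for _, e := range evs {
		fmt.Println(e.ID, e.CreatedAt)
	}
}
```

sqlite/postgres get this for free with an `ORDER BY created_at DESC`; a badger-backed engine has to do it explicitly in Go, as above.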

create an issue on the repo for adding a badger-based DB engine implementation and assign it to me. i'll get to it in the next month. it's not a difficult task: look at the interface, create stub implementations of all the methods, then fill in the right bits based on the existing engine i've built. probably a day or so of work.


Discussion

> i mean, where sqlite or postgresql or the other backends are used, the results are sorted in the engine, but the badger engine that fiatjaf made doesn't sort

oooof, not good.

> and it runs WAY too many threads

typical...

> create an issue on the repo and tag me as assigned to the issue of adding a badger based DB engine implementation

That's great, thank you sir, I'll do it ASAP!

just following up: the implementation is done and a PR is awaiting your review.

i added a few helpers and other bits, mainly in the #orly tag code, that should also help with transitioning.

orly event and filter codecs use raw binary for the fields that represent binary data. event IDs, pubkeys, and signatures are all in that form already after they've been unmarshaled. i think it helps a little with memory and speed, since comparing one raw byte is faster than comparing the two hex characters that encode it.

anyway, i hope it helps you make your analytics even more efficient. i have it in mind to look into it myself, after a few people have put it into their apps. now that i'm gonna be in the credits, i want to know what the show looks like :)

Thanks mate, I'll look into it tomorrow.