Maybe it makes sense to reuse the decoded objects in the decoding path too? But the impact will probably be much smaller.

Discussion

yeah, i tried to write a decoder cache for the network, but it was too much complication for probably too little gain

i think a more promising optimization is isolating the binary and json versions to only on the wire and in the db, so all other work with the events stays in the fast native form. the default event struct, with hex encoding for the id, pubkey and signature, is not optimal - honestly they should all be their native types, except the ID, which should be []byte

the internal version of the pubkey should be the same type that the btcec signature verification function consumes, and the signature should be a schnorr.Signature
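a minimal sketch of what that could look like, assuming btcec/v2 - the field names here are hypothetical, not go-nostr's actual API:

```go
// Sketch: keep wire/db encodings at the edges, hold native types internally.
package event

import (
	"github.com/btcsuite/btcd/btcec/v2"
	"github.com/btcsuite/btcd/btcec/v2/schnorr"
)

// Event stores the ID as raw bytes and the pubkey/signature as the exact
// types the verifier consumes, so hex only appears at the boundaries.
type Event struct {
	ID     []byte             // 32-byte event hash, hex-encoded only for output
	PubKey *btcec.PublicKey   // parsed once from the wire, reused everywhere
	Sig    *schnorr.Signature // parsed once, ready for verification
	// ... kind, tags, content, created_at as before
}

// Verify checks the schnorr signature over the event ID with no hex
// round-trips in the hot path.
func (ev *Event) Verify() bool {
	return ev.Sig.Verify(ev.ID, ev.PubKey)
}
```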

changes i put on my mental todo list long ago and then forgot about

also, yes, reusing buffers is a huge thing... reducing GC pressure on stuff you can easily reuse is an easy win
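a generic illustration of the technique (not code from either repo) using sync.Pool, with an assumed typical message size:

```go
// Hot paths borrow a preallocated slice instead of allocating, which keeps
// the garbage collector out of the loop.
package main

import "sync"

var bufPool = sync.Pool{
	New: func() any {
		b := make([]byte, 0, 4096) // assumed typical message size
		return &b                  // pointer avoids an allocation on Put
	},
}

func handle(payload []byte) {
	bp := bufPool.Get().(*[]byte)
	buf := (*bp)[:0] // keep capacity, drop old contents

	buf = append(buf, payload...)
	// ... encode/decode using buf ...

	*bp = buf // return the (possibly grown) backing array to the pool
	bufPool.Put(bp)
}

func main() {
	handle([]byte(`["EVENT", {}]`))
}
```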

Yes, I wanted to do that too, but I don't want to break the API. And then I start thinking there won't be much gain anyway: the signature is only checked once, the pubkey might have to be serialized back to hex for printing anyway (sometimes more than once), and so on, so the gains are not super clear.

And if you want faster signature verification you should use github.com/nbd-wtf/go-nostr/libsecp256k1 anyway (or do your own thing and refactor it massively, complaining about my code in the process); the differences are very big. I don't know why I didn't make those bindings before -- it's weird that no one else had done them either.

i'm not using cgo

pretty sure someone did some bindings ages ago, but not for schnorr signatures, only ecdsa

and yeah, it's pretty lame how the gob codec only talks to readers and bytes.Buffer doesn't let you swap out the underlying buffer. that decoder setup is a massive overhead - in the encode step it accounts for around 800us, and honestly that looks like most of the time the encoder is taking... i think writing an io.Reader that lets you point at a new buffer would change the story drastically
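(as an aside, the stdlib's bytes.Reader - unlike bytes.Buffer - does have a Reset([]byte) method that repoints it at a new slice; a hand-rolled equivalent is only a few lines. this is a sketch, not the repo's code:)

```go
package wire

import "io"

// ReusableReader satisfies io.Reader but can be repointed at a new byte
// slice without allocating, so a single gob.Decoder can be built once and
// fed fresh buffers.
type ReusableReader struct {
	buf []byte
	pos int
}

// Reset points the reader at a new buffer and rewinds it.
func (r *ReusableReader) Reset(b []byte) { r.buf, r.pos = b, 0 }

func (r *ReusableReader) Read(p []byte) (int, error) {
	if r.pos >= len(r.buf) {
		return 0, io.EOF
	}
	n := copy(p, r.buf[r.pos:])
	r.pos += n
	return n, nil
}
```

one caveat: gob streams are stateful (type descriptors are sent once per stream), so this only pays off if the encoding side likewise reuses a single gob.Encoder; feeding independent one-shot gob blobs to one long-lived Decoder will likely fail when the second stream re-sends its type definitions.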

bytes.Buffer is honestly a terrible thing... i also made my own variant of it for the json envelope decoder. it simply has no notion of pre-allocating a buffer that you know isn't going to need to grow - growing buffers is a massive overhead cost and in many cases easy to avoid
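a hypothetical sketch of what such a variant can look like (not the actual replicatr code): a write buffer allocated once at a known size that refuses to grow rather than silently reallocating:

```go
package wire

import "io"

// FixedBuffer is allocated once at a known size; it returns an error
// instead of reallocating when a write would exceed its capacity.
type FixedBuffer struct {
	b   []byte
	pos int
}

func NewFixedBuffer(size int) *FixedBuffer {
	return &FixedBuffer{b: make([]byte, size)}
}

func (f *FixedBuffer) Write(p []byte) (int, error) {
	if f.pos+len(p) > len(f.b) {
		return 0, io.ErrShortBuffer // growing is the caller's bug, not ours
	}
	n := copy(f.b[f.pos:], p)
	f.pos += n
	return n, nil
}

func (f *FixedBuffer) Bytes() []byte { return f.b[:f.pos] }
func (f *FixedBuffer) Reset()        { f.pos = 0 }
```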

https://github.com/Hubmakerlabs/replicatr/blob/main/pkg/nostr/wire/text/mangle.go

it includes a bunch of handy functions that let you snip out an enclosed object or array inside another object or array without parsing it, as well as snipping out strings while correctly handling escaped quotes
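a rough illustration of the technique (not the actual mangle.go code): walk the bytes, track nesting depth, and skip over string contents so quotes and brackets inside strings - including escaped ones - don't confuse the counter:

```go
package text

// snipEnclosed finds the end of the JSON object or array starting at
// b[start] without parsing or allocating. Returns the index just past the
// closing bracket, so b[start:end] is the enclosed value.
func snipEnclosed(b []byte, start int) (end int, ok bool) {
	if start >= len(b) {
		return 0, false
	}
	opener := b[start]
	var closer byte
	switch opener {
	case '{':
		closer = '}'
	case '[':
		closer = ']'
	default:
		return 0, false
	}
	depth, inString := 0, false
	for i := start; i < len(b); i++ {
		c := b[i]
		if inString {
			switch c {
			case '\\':
				i++ // skip whatever is escaped, including \"
			case '"':
				inString = false
			}
			continue
		}
		switch c {
		case '"':
			inString = true
		case opener:
			depth++
		case closer:
			depth--
			if depth == 0 {
				return i + 1, true
			}
		}
	}
	return 0, false
}
```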

some parts of the Go stdlib are written like a soviet committee, but others are not so bad. i particularly hate math/big. i've been working with the noise protocol recently, and the way it automatically appends output to an input parameter is just fucking ridiculous - like, fuck's sake, do one thing and do it well. this one feature blows up the heap and breaks any attempt you make to contain garbage production
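for reference, crypto/cipher's AEAD uses the same append-to-dst pattern; you can at least contain the garbage by reslicing a preallocated buffer to zero length, so the append reuses your capacity instead of growing a fresh heap allocation on every call:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
)

func main() {
	key := make([]byte, 32)
	nonce := make([]byte, 12)
	rand.Read(key)
	rand.Read(nonce)

	block, _ := aes.NewCipher(key)
	aead, _ := cipher.NewGCM(block)

	msg := []byte("hello")
	// Allocate once with room for ciphertext + auth tag, then reuse.
	out := make([]byte, 0, len(msg)+aead.Overhead())
	for i := 0; i < 3; i++ {
		out = aead.Seal(out[:0], nonce, msg, nil) // appends into our capacity
	}
	_ = out
}
```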