nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6 your binary coder is 193µs/op versus gob's 240µs/op; it's not that big a margin

the one in binary.go is the fastest

personally, i think if you want to squeeze it a bit faster, consider using reflect to force-re-type those integers (they will go to whatever your hardware's endianness is, which is the opposite of BigEndian on intel/amd)
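for example, a minimal sketch of the trick (this uses unsafe rather than reflect, and putUint64Native is a hypothetical helper, not something from binary.go):

package main

import (
    "fmt"
    "unsafe"
)

// putUint64Native stores v in the host's native byte order, skipping
// the byte swap that binary.BigEndian performs on little-endian
// hardware (intel/amd)
func putUint64Native(b []byte, v uint64) {
    // reinterpret the first 8 bytes of b as a *uint64 and store directly
    *(*uint64)(unsafe.Pointer(&b[0])) = v
}

func main() {
    b := make([]byte, 8)
    putUint64Native(b, 0x0102030405060708)
    fmt.Printf("% x\n", b) // prints "08 07 06 05 04 03 02 01" on little-endian
}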

the opportunity i see for a big performance increase is moving all that hexadecimal encoding to the network side only, and keeping everything internal as raw bytes
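for instance, keep ids as raw 32-byte arrays internally and only hex-encode at the json boundary (a sketch; EventID and its MarshalJSON here are hypothetical, not from the actual codebase):

package main

import (
    "encoding/hex"
    "fmt"
)

// EventID lives internally as raw bytes; hex only appears when the
// value crosses the network boundary as JSON
type EventID [32]byte

func (id EventID) MarshalJSON() ([]byte, error) {
    buf := make([]byte, 66) // 64 hex chars plus two quotes
    buf[0] = '"'
    hex.Encode(buf[1:65], id[:])
    buf[65] = '"'
    return buf, nil
}

func main() {
    var id EventID
    id[0] = 0xab
    j, _ := id.MarshalJSON()
    fmt.Println(string(j))
}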

It's a giant margin! 20% faster!

But thank you for the tips.


Discussion

you wrote the test wrong, gob is actually 25% faster

Please send me your benchmark code.

i put it in another reply

all i did was move the codec-creation step outside the loop

b.Run("gob.Encode", func(b *testing.B) {
    var buf bytes.Buffer
    enc := gob.NewEncoder(&buf)
    for i := 0; i < b.N; i++ {
        for _, evt := range events {
            enc.Encode(evt)
            // _ = buf.Bytes()
        }
    }
})

also, as you can see, it doesn't actually access the buffer with buf.Bytes()

neither of these steps really reflects any distinctive part of the process being measured, and i doubt either of them takes more than a dozen nanoseconds anyway

Don't you have to .Reset() the buffer or something like that? Otherwise you'll keep appending to the same buffer over and over, it will get huge, and it will reallocate all the time.

well, make the change, it's way faster, but yes, probably it should be doing buf.Reset()
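something like this (a sketch based on the benchmark above, with the same events slice):

b.Run("gob.Encode", func(b *testing.B) {
    var buf bytes.Buffer
    enc := gob.NewEncoder(&buf)
    for i := 0; i < b.N; i++ {
        for _, evt := range events {
            enc.Encode(evt)
        }
        buf.Reset() // drop the accumulated bytes so the buffer doesn't grow without bound
    }
})

note that gob sends type metadata only on the first Encode of a given type per encoder, so the first iteration is slightly more expensive than the rest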

i'm probably gonna have to write a better bytes.Buffer now after seeing this shit

also, maybe bufio is the droids we are looking for here
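i.e. wrap the destination in a bufio.Writer so gob's many small writes get coalesced before hitting the underlying sink (a sketch; io.Discard stands in for a real socket or file):

package main

import (
    "bufio"
    "encoding/gob"
    "io"
)

type Event struct{ ID string }

func main() {
    // bufio.Writer batches gob's small writes instead of passing
    // each one straight through to the underlying writer
    w := bufio.NewWriter(io.Discard)
    enc := gob.NewEncoder(w)
    _ = enc.Encode(Event{ID: "deadbeef"})
    _ = w.Flush() // flush whatever is still sitting in the buffer
}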