ok, so, i removed all the logging calls from my encoder after discovering they accounted for a massive share of the benchmark timing (and of real performance), and i was pleased to see this result:

cpu: AMD Ryzen 5 PRO 4650G with Radeon Graphics
BenchmarkBinaryEncoding
BenchmarkBinaryEncoding/event2.EventToBinary
BenchmarkBinaryEncoding/event2.EventToBinary-12        788      1467852 ns/op
BenchmarkBinaryEncoding/gob.Encode
BenchmarkBinaryEncoding/gob.Encode-12                   94     13787646 ns/op
BenchmarkBinaryEncoding/binary.Marshal
BenchmarkBinaryEncoding/binary.Marshal-12               48     24622387 ns/op
BenchmarkBinaryDecoding
BenchmarkBinaryDecoding/event2.BinaryToEvent
BenchmarkBinaryDecoding/event2.BinaryToEvent-12        600      1957643 ns/op
BenchmarkBinaryDecoding/gob.Decode
BenchmarkBinaryDecoding/gob.Decode-12                   27     43210743 ns/op
BenchmarkBinaryDecoding/binary.Unmarshal
BenchmarkBinaryDecoding/binary.Unmarshal-12            852      1379193 ns/op
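
for anyone curious how these numbers are produced: the shape of the benchmark is a set of testing.B sub-benchmarks that each run the whole batch of sample events per iteration. the Event type and sample slice below are placeholders, only encoding/gob is real standard library API:

package codec_test

import (
	"bytes"
	"encoding/gob"
	"testing"
)

// Event is a placeholder stand-in for the real nostr event struct.
type Event struct {
	ID      string
	Kind    int
	Content string
	Tags    [][]string
}

// in the real benchmark these are loaded from sample data
var events = make([]Event, 10000)

func BenchmarkBinaryEncoding(b *testing.B) {
	b.Run("gob.Encode", func(b *testing.B) {
		var buf bytes.Buffer
		for i := 0; i < b.N; i++ {
			buf.Reset()
			enc := gob.NewEncoder(&buf)
			for j := range events {
				if err := enc.Encode(&events[j]); err != nil {
					b.Fatal(err)
				}
			}
		}
	})
	// the event2.EventToBinary and binary.Marshal sub-benchmarks follow
	// the same b.Run pattern
}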

I've slimmed the comparison down to mine, nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6's, and the built-in gob encoder/decoder from the Go standard library

my binary encoding is roughly 9x as fast as gob, and my decoding over 20x as fast; i did not expect it to be that much faster, but ok. not sure if that's because i reused the write buffer

the decoder managed 600 iterations versus 852 for binary.Unmarshal, so it's slower, but not that much slower
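
for reference, the write-buffer reuse i mentioned looks roughly like this, an append buffer that keeps its capacity across calls, using the placeholder Event type from the sketch above; the signature and helper are illustrative, not my actual API:

// rough sketch of reusing one append buffer across encodes instead of
// allocating a fresh slice per event (illustrative, not the real API;
// assumes string fields fit a uint16 length for brevity)
func EventToBinary(buf []byte, ev *Event) []byte {
	buf = buf[:0] // drop old contents, keep the allocated capacity
	buf = appendString(buf, ev.ID)
	buf = append(buf, byte(ev.Kind), byte(ev.Kind>>8)) // 2-byte little-endian kind
	buf = appendString(buf, ev.Content)
	return buf
}

// appendString writes a 2-byte little-endian length prefix, then the bytes
func appendString(buf []byte, s string) []byte {
	buf = append(buf, byte(len(s)), byte(len(s)>>8))
	return append(buf, s...)
}

the caller keeps handing the returned slice back in, so after warm-up the encode path allocates nothing.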

here it is again, this time using 10,000 of those sample events; the iteration counts are much lower of course, but the results should be more representative since the larger batch smooths out run-to-run variation

cpu: AMD Ryzen 5 PRO 4650G with Radeon Graphics
BenchmarkBinaryEncoding
BenchmarkBinaryEncoding/event2.EventToBinary
BenchmarkBinaryEncoding/event2.EventToBinary-12        139      8155043 ns/op
BenchmarkBinaryEncoding/gob.Encode
BenchmarkBinaryEncoding/gob.Encode-12                   19     62083345 ns/op
BenchmarkBinaryEncoding/binary.Marshal
BenchmarkBinaryEncoding/binary.Marshal-12                9    112449500 ns/op
BenchmarkBinaryDecoding
BenchmarkBinaryDecoding/event2.BinaryToEvent
BenchmarkBinaryDecoding/event2.BinaryToEvent-12        100     13966114 ns/op
BenchmarkBinaryDecoding/gob.Decode
BenchmarkBinaryDecoding/gob.Decode-12                    5    223875643 ns/op
BenchmarkBinaryDecoding/binary.Unmarshal
BenchmarkBinaryDecoding/binary.Unmarshal-12            121      9243299 ns/op

the encoding advantage is a little smaller here, around 14x over binary.Marshal; the decoding deficit against binary.Unmarshal, going by iteration counts, comes out to only about 20% in this test (100 versus 121 iterations)

anyway, everyone knows boys like pissing contests, i'm no different

one thing this has revealed to me is that, at least for benchmarking, i need a way to disable the logging more completely. the runs above are with all logging overhead removed entirely, and the difference is striking; it shows just how expensive my logging is in runtime throughput
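
one way to get close to that without deleting the call sites is to make disabled log calls nearly free; a minimal sketch of the idea (not my actual logger), where a disabled call is just one atomic load:

package mylog

import (
	"fmt"
	"os"
	"sync/atomic"
)

// level: 0 = off, 1 = debug; checked with one atomic load per call
var level atomic.Int32

func SetDebug(on bool) {
	if on {
		level.Store(1)
	} else {
		level.Store(0)
	}
}

// Debugf formats and prints only when debug logging is enabled, so the
// expensive fmt.Fprintf never runs on the hot path when it is off.
// note: building the args slice still costs the caller something even
// when disabled, which is why stripping the calls out entirely is
// faster still.
func Debugf(format string, args ...any) {
	if level.Load() < 1 {
		return
	}
	fmt.Fprintf(os.Stderr, format+"\n", args...)
}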

for comparison with the second set from 10k events, this is with my logging library enabled:

cpu: AMD Ryzen 5 PRO 4650G with Radeon Graphics
BenchmarkBinaryEncoding
BenchmarkBinaryEncoding/event2.EventToBinary
BenchmarkBinaryEncoding/event2.EventToBinary-12          6    193251993 ns/op
BenchmarkBinaryEncoding/gob.Encode
BenchmarkBinaryEncoding/gob.Encode-12                   19     63600374 ns/op
BenchmarkBinaryEncoding/binary.Marshal
BenchmarkBinaryEncoding/binary.Marshal-12                9    141026239 ns/op
BenchmarkBinaryDecoding
BenchmarkBinaryDecoding/event2.BinaryToEvent
BenchmarkBinaryDecoding/event2.BinaryToEvent-12          9    120024419 ns/op
BenchmarkBinaryDecoding/gob.Decode
BenchmarkBinaryDecoding/gob.Decode-12                    5    232574356 ns/op
BenchmarkBinaryDecoding/binary.Unmarshal
BenchmarkBinaryDecoding/binary.Unmarshal-12            140      9325969 ns/op

this is now the result after stripping the fek out of my logger library; i just had some hassles with go module versions along the way

cpu: AMD Ryzen 5 PRO 4650G with Radeon Graphics
BenchmarkBinaryEncoding
BenchmarkBinaryEncoding/event2.EventToBinary
BenchmarkBinaryEncoding/event2.EventToBinary-12        141      8038303 ns/op
BenchmarkBinaryEncoding/gob.Encode
BenchmarkBinaryEncoding/gob.Encode-12                   18     61665344 ns/op
BenchmarkBinaryEncoding/binary.Marshal
BenchmarkBinaryEncoding/binary.Marshal-12                9    111623125 ns/op
BenchmarkBinaryDecoding
BenchmarkBinaryDecoding/event2.BinaryToEvent
BenchmarkBinaryDecoding/event2.BinaryToEvent-12         99     13979812 ns/op
BenchmarkBinaryDecoding/gob.Decode
BenchmarkBinaryDecoding/gob.Decode-12                    5    216066451 ns/op
BenchmarkBinaryDecoding/binary.Unmarshal
BenchmarkBinaryDecoding/binary.Unmarshal-12            122      9004904 ns/op

now i can focus on the BinaryToEvent code and remove that difference 😁
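
the direction i want to take BinaryToEvent is decoding straight off the input byte slice by walking an offset, mirroring the encoder sketch above; again this is just an illustrative shape with the placeholder Event, not the real decoder (it needs the standard library "errors" import):

// rough sketch of decoding by walking an offset through the input slice;
// strings are copied out so the Event does not alias the input buffer
var errShortBuffer = errors.New("binary: short buffer")

func BinaryToEvent(data []byte, ev *Event) error {
	var off int
	var err error
	if ev.ID, off, err = readString(data, off); err != nil {
		return err
	}
	if off+2 > len(data) {
		return errShortBuffer
	}
	ev.Kind = int(data[off]) | int(data[off+1])<<8 // 2-byte little-endian kind
	off += 2
	ev.Content, _, err = readString(data, off)
	return err
}

// readString reads a 2-byte little-endian length prefix, then that many bytes
func readString(data []byte, off int) (string, int, error) {
	if off+2 > len(data) {
		return "", off, errShortBuffer
	}
	n := int(data[off]) | int(data[off+1])<<8
	off += 2
	if off+n > len(data) {
		return "", off, errShortBuffer
	}
	return string(data[off : off+n]), off + n, nil
}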

i'm getting 400 errors after trying to push up a new version: the git http server reports bogus packets, then when it tries to decode gzip that fails too because the gzip header is mangled, and this is on the request being sent by the go tool...

not sure what to try now... maybe a different go version, seems like 1.22.3 might be buggy

oof, no beans with 1.22.2 either, what the actual... this was working just days back; now i try to push a new version and suddenly a bug i thought i'd fixed in legit is back again: either a bogus encoded request from the fucking go tool, or bogus decoding on the server side

dammmmmitttt

this must be a go.work or go.mod replace error
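
for reference, this is the kind of thing i mean: a use entry in go.work or a replace directive in go.mod pointing at a local copy will silently shadow whatever version the go tool would otherwise fetch from the server (the module path and directories here are made up for illustration):

go.work:

go 1.22

use (
	.
	../mylog
)

or in go.mod:

replace example.com/me/mylog => ../mylog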

hmmmm maybe the legit version on my thing is broken...

bah, i wanted to do something else today, not debug legit again; i swear i fixed this gzip encoding bug before, but evidently not

maybe it's time to go have a coffee

more probing suggests the problem is more likely a bugged git repo that has somehow been mangled

the repo clearly shows more tags than the ones go is offering, but it chokes on anything later than an old version, which tells me there might be some glitch with the actual git tags or someshit

gonna come back and fix it after coffee, have not had any problems with bumping new tags until this one, it's special

yes, confirmed: i wiped the existing bare repo on the host and pushed up the previous version and then the new version with the minimal, fast logging code

yes, it worked, straight away.

git is buggy, hurrah

also, second check of the other cafe down the road... seems like it hasn't been open in a while

methinks the lady i spoke to yesterday doesn't really know what her offspring is up to these days