Benchmarking Nostr event parsing in JavaScript ("leaner" means a very simple and performant binary format):

If these results are right (I'm not sure they are), that means JavaScript is very not-great-at-all at dealing with binary data, and if Nostr events had been designed to be binary from the get-go, that would have entailed a massive performance loss for web, React Native, and JavaScript clients of all sorts.


Discussion

The code is at https://github.com/fiatjaf/nostr-json-benchmarks

(as you can see, the "leaner" codec is much faster than anything else when running on Go.)


The JavaScript code does not seem optimal.

Avoid using ‘data.buffer.slice()’ as it allocates memory and creates a copy.

Have you tried using ‘new DataView(data.buffer, byteOffset, byteLength)’ to reference the same buffer?

If you need more performance, you may also use ‘new Uint8Array(data.buffer, byteOffset, length)’ which is common practice.
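The difference being described can be sketched roughly like this (`data` here is just a hypothetical buffer standing in for a serialized event, not the repo's actual format):

```javascript
// A sample buffer: a 4-byte big-endian length prefix followed by payload bytes.
const data = new Uint8Array([0, 0, 0, 3, 10, 20, 30]);

// Copying approach: ArrayBuffer.slice() allocates a new buffer and copies into it.
const copy = data.buffer.slice(4, 7); // fresh allocation + memcpy

// Zero-copy approach: views reference the same underlying ArrayBuffer.
const header = new DataView(data.buffer, 0, 4);     // read the prefix in place
const payload = new Uint8Array(data.buffer, 4, 3);  // window over the payload

console.log(header.getUint32(0));            // 3 (big-endian length prefix)
console.log(payload[0], payload[2]);         // 10 30
console.log(payload.buffer === data.buffer); // true: no copy was made
```

Both views share memory with `data`, so constructing them is cheap regardless of how large the payload is, whereas the cost of `slice()` grows with the number of bytes copied.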

No, I did not try. I don't understand why we can't just have a simple byte array instead of all these weird types. Please send a pull request.

OK, I did what you suggested, plus some other small optimizations. I wasn't expecting .slice() to make a copy. Now the performance is much better, but still more than 2x slower than JSON.parse and NSON.

I know that benchmarking on the JVM is notoriously hard, because of the JIT compiler and various other non-deterministic factors. So given that most JS engines are also JIT compiled, perhaps the same is true here?
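One generic way to control for JIT warm-up (just a sketch, not the repo's actual harness) is to run the function many times before starting the timer:

```javascript
// Generic micro-benchmark helper: warm the JIT, then measure.
// `fn` is whatever decode function you want to time (hypothetical here).
function bench(fn, warmup = 10000, iters = 100000) {
  for (let i = 0; i < warmup; i++) fn(); // let the engine optimize the hot path
  const start = performance.now();
  for (let i = 0; i < iters; i++) fn();
  const elapsedMs = performance.now() - start;
  return (elapsedMs * 1e6) / iters; // nanoseconds per operation
}

const sample = '{"kind":1,"content":"hello"}';
console.log(bench(() => JSON.parse(sample)).toFixed(1), "ns/op");
```

Even with warm-up, results still vary run to run because of GC pauses and tiered compilation, so taking the median of several runs is the usual practice.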

Someone could compile a Go/Rust solution to Wasm for use in browsers.

I see that Deno runs JSON.parse at around 1800ns/op.

I can't understand which of the results in the README you're comparing it to.

If binary formats don't give significant performance increase I see no reason to use them.