Are we using the built-in Apple JSON support? Maybe we can try swapping it out for a faster external library?
The other major issue I see is that WebSockets have no congestion management: it's event by event, and a large event blocks all the smaller ones behind it.
This? “IkigaJSON is a really fast JSON parser. It performed ~4x faster than macOS/iOS Foundation in our tests when decoding a type from JSON.”
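Something like this would be the swap. Sketch only: I'm assuming IkigaJSON mirrors Foundation's JSONDecoder API, and NostrEvent here is a simplified stand-in for the real event type:

```swift
import Foundation
import IkigaJSON

// Simplified stand-in for the app's real event type.
struct NostrEvent: Codable {
    let id: String
    let pubkey: String
    let sig: String
    let content: String
}

func decodeEvent(_ data: Data) throws -> NostrEvent {
    // Assumed drop-in replacement for:
    //   try JSONDecoder().decode(NostrEvent.self, from: data)
    var decoder = IkigaJSONDecoder()
    return try decoder.decode(NostrEvent.self, from: data)
}
```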
I tried to switch to this and it was much slower
Damn. Have you seen fiatjaf’s NSON proposal?
Yeah, I don't get it; I don't know why it's not just a new wire format. I've already replied on the PR about it.
I tried a branch where I built my own custom JSON decoder, but you still need to walk every byte in the string and unescape things. JSON decoding is not trivial, whereas a TLV format would be as simple as copying bytes directly.
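For illustration, a TLV reader really is just header reads plus byte copies. Minimal sketch with a made-up framing (1-byte type, 4-byte little-endian length), not fiatjaf's NSON or any proposed Nostr format:

```swift
import Foundation

// Hypothetical framing: [type: 1 byte][length: 4 bytes LE][value: length bytes]
struct TLVRecord {
    let type: UInt8
    let value: Data
}

func parseTLV(_ data: Data) -> [TLVRecord]? {
    let bytes = [UInt8](data)   // flat array for simple indexing
    var records: [TLVRecord] = []
    var i = 0
    while i < bytes.count {
        guard bytes.count - i >= 5 else { return nil }   // need a full header
        let type = bytes[i]
        let length = Int(bytes[i + 1])
            | Int(bytes[i + 2]) << 8
            | Int(bytes[i + 3]) << 16
            | Int(bytes[i + 4]) << 24
        i += 5
        guard bytes.count - i >= length else { return nil }
        // Unlike JSON strings, the value is a straight byte copy: no unescaping.
        records.append(TLVRecord(type: type, value: Data(bytes[i ..< i + length])))
        i += length
    }
    return records
}
```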
I just realized this isn't true: since the JSON-encoded string is signed, you would be forced to decode JSON strings regardless. Damn, this probably isn't worth it then.
Nm, I'm dumb, the decoded string is signed. I should have coffee before I start thinking of new data formats.
oof
Do you actually need to check the signature in this situation?
Right now there are only around five or so relay implementations. The issue is that at some point, when we end up with many more satellite (smaller) relays, malicious relays (or even software bugs) will start to appear.
Maybe trusted relays could have their events skip validation; however, at any moment validation could become really important to enforce client-side.
You could run the validation async on the phone (e.g. first get the data, then verify signatures on demand as the user scrolls through the feed, and maybe show a little checkmark when verified and a red warning when not).
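Rough sketch of what that could look like in SwiftUI, with a hypothetical verify(_:) helper standing in for the real Schnorr check:

```swift
import SwiftUI

// Simplified stand-ins; the real app's event type and verifier would be used.
struct NostrEvent: Identifiable {
    let id: String
    let pubkey: String
    let sig: String
    let content: String
}

enum SigState { case unchecked, valid, invalid }

struct EventRow: View {
    let event: NostrEvent
    @State private var sigState: SigState = .unchecked

    var body: some View {
        HStack {
            Text(event.content)
            switch sigState {
            case .unchecked:
                EmptyView()
            case .valid:
                Image(systemName: "checkmark.seal")
            case .invalid:
                Image(systemName: "exclamationmark.triangle")
                    .foregroundColor(.red)
            }
        }
        // Runs when the row appears, so off-screen events are never verified.
        .task {
            sigState = await verify(event) ? .valid : .invalid
        }
    }
}

// Hypothetical async wrapper around the client's Schnorr signature check.
func verify(_ event: NostrEvent) async -> Bool {
    // Placeholder: real code checks event.sig against event.id and event.pubkey.
    true
}
```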
Yeah, on this note: first, JSON decoding of multiple events can be trivially parallelized, and second, the rapidjson C++ lib is incredibly fast.
Finally, the biggest performance hit with this type of work is the heap allocator being used to allocate new memory for each object being decoded.
If you pre-allocate a bunch of space for the decoder output and use a parser like rapidjson, where the entire result lives in one contiguous memory region, it'll be blazing fast without any need to make architectural changes to the protocol.
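For the parallelization half, here's a sketch using Swift's TaskGroup (NostrEvent again a simplified stand-in). Note it doesn't touch the allocator problem; each task still pays Foundation's per-object allocation cost, which is what a rapidjson-style arena would avoid:

```swift
import Foundation

// Simplified stand-in for the app's real event type.
struct NostrEvent: Codable {
    let id: String
    let sig: String
    let content: String
}

// Decode a batch of raw event payloads concurrently; order is not preserved.
func decodeEvents(_ payloads: [Data]) async -> [NostrEvent] {
    await withTaskGroup(of: NostrEvent?.self) { group in
        for data in payloads {
            group.addTask {
                // Still allocates per decoded object via Foundation.
                try? JSONDecoder().decode(NostrEvent.self, from: data)
            }
        }
        var events: [NostrEvent] = []
        for await event in group {
            if let event { events.append(event) }
        }
        return events
    }
}
```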
On second thought, I do retract the “trivially parallelized” part. Parallelizing will add extra complexity and friction, so it's not zero cost. But I stand by my other point about the heap allocator being the main culprit.