Yeah we were rocking libp2p but compatibility issues are real.
This TLS with key support will be nice for the WebSockets transport.
I mean, it won't work in a browser anyway? Noise is definitely the way to go if you're building something without compat in mind, and this won't get compat anyway.
True, it's not any more compatible if browsers don't adopt it; perhaps it's just easier to implement?
I imagine it will be easier for new app developers to integrate basic TLS with key support than to set up Noise or libp2p (so their apps can connect to nostr:npub1h0rnetjp2qka44ayzyjcdh90gs3gzrtq4f94033heng6w34s0pzq2yfv0g nostr relays).
Still going to offer libp2p QUIC support for apps that want to go the extra mile (TLS is built-in without CAs).
BTW, this was defined in https://datatracker.ietf.org/doc/html/rfc7250 (back in the TLS 1.2 days, I think), and TLS 1.3 (https://datatracker.ietf.org/doc/html/rfc8446) mentions it.
TLS is a really bad protocol if you're doing something greenfield. Please don't ever use it unless you're stuck in the web-browser world.
Libp2p QUIC with npub is faster than websockets with DHKE noise.
In the case of Nostr, libp2p QUIC provides better security against MITM attacks… if you know the relay's npub and can establish an encrypted connection with it. The npub is used as the libp2p peer ID.
If you don't know the key of the node you're connecting to, then Noise is indeed the way to go (ephemeral key generation), given you can't use a known npub to stop MITMs. CAs were made especially to stop MITMs; this gives us our own way of doing it, provided you have the relay's key from a trusted source beforehand.
I'm incredibly, incredibly skeptical that with the amount of data we're talking about you can even measure the difference in performance on a LAN, let alone the internet.
The point about MITMs is a bit more important than the speed. :-)
Sure, it might be a tiny difference in speed…
QUIC is known to need fewer round trips than normal TLS over TCP, which means it's definitely faster than WebSockets+Noise; benchmarking isn't necessary.
AFAIR QUIC has the same number of round trips as normal TLS if you set the TCP options right. Basically it shaves off RTTs because it begins the TLS handshake in the SYN. You can do that with TCP, too, doubly so if you aren't using a TLS library that sets socket options for you. The claim in your diagram that you need 0 full RTTs to do QUIC setup is nonsense; that's only if you've spoken to the server before and it has cached keys, and the 0-RTT TLS stuff isn't being implemented in generic HTTP stacks because of replay issues.
You could theoretically tailor TCP + TFO + Noise to achieve 1 RTT, but that sounds like a headache to implement. If any pre-made libraries offer that setup, drop a link!
While it's true that QUIC's 0-RTT mode isn't widely used due to replay-attack risks, libp2p QUIC achieves an encrypted connection in 1 RTT, which is still faster than typical WebSockets over TLS (3 RTTs: TCP handshake, then TLS, then the HTTP upgrade).
What's neat is that libp2p exchanges peer IDs during the QUIC handshake, meaning MITM attacks are mitigated if you've already retrieved the relay's key from a trusted profile.
Why do you dislike QUIC/TLS so much if it's free of CAs? How does it compare to TCP+TFO+Noise?
TLS's problems aren't just CAs being a mess; it's also an anachronistic protocol that just isn't how you'd design something today. 1.3 is better, sure, but it carries tons of legacy garbage and most clients still have fallbacks and logic for the old versions.
I also dislike QUIC for being a lazy, poor version of TCP. Middleboxes suck sometimes, but sometimes they do useful things (e.g. on a plane, TCP is terminated before it goes to the satellite, improving latency and throughput compared to UDP things with retransmissions; middleboxes can use MSS clamping to avoid fragmentation; etc.). QUIC largely failed to consider these things and just said "screw all middleboxes" (compare to e.g. tcpinc, which got similar encryption properties without being lazy). QUIC exists to rebuild TCP in user space because kernels sometimes suck, but for those of us with an operating system newer than five years old that's not a problem we have. Worse, sometimes your OS has specific useful features (e.g. MP-TCP) that you don't want twenty apps to have to rewrite. FFS, this is literally the point of having an OS! The only promise QUIC made that isn't as trivial in TCP is FEC, but they gave up on it because… I dunno why.
Note that QUIC is useful on the web for helping to avoid the multi-connection (and associated initial small window sizes)/head-of-line-blocking tradeoff. But if you aren't fetching a bunch of resources from the same server across different streams, where you can use each resource individually on its own, this doesn't apply (and it requires integration work to make it apply).
We are indeed fetching a bunch of resources from the same server across different streams.
Libp2p allows us to use multiplexing so we can open as many bi-directional streams as we want over a single connection, it's awesome. We use it for Airlock (permission system for the decentralized GitHub).
I understand your point about the OS handling TCP instead of each app handling networking individually, which does make a lot of sense. I wish there were a plug-and-play TCP+TFO+Noise library that could handle multiplexing! Would be a nice addition to include in libp2p.
I mean, if you drop the TFO requirement it's easy: just open many connections. But just fetching many resources isn't sufficient to want QUIC. You have to be doing so immediately after opening the connection(s), the resources have to be non-trivial in size (think tens of packets, so the text of a note generally doesn't qualify), and there has to be a need for not blocking one on another, which is generally not the case in a mobile app: the server can send the first three things you need to paint the full window first and then more later.
It's a desktop app for decentralized GitHub on Nostr. The amount of data is non-trivial in size (sometimes); repos can be large. This is why we're using Merkle-tree chunking for large files as well. I want the reduced RTT.
It's just a head-of-line-blocking question, though… I imagine mostly you're not downloading lots of CSS/JS/images, which is the big head-of-line issue HTTP clients have: they can render the page partially in response to each new item they get off the wire.
I assume you don't, really, though? You presumably get events back from the server in time order and populate the page down in time order, so one event stalling the next mostly won't make all that much difference in UX?