What is the difference between p2p and client-server? What people mean by p2p has always been multiple things, and it clouds my thinking when trying to reason about what people are saying.

Every network connection has an initiator. If we call that side the client, then every network connection is client-server... so p2p is not an alternative architecture to client-server, but just some subclass of client-server.

Maybe p2p means that the server is in your house, instead of in a data centre. Or that technically, it is behind NAT and maybe not Internet exposed. In that case holepunching tech might be useful. This comes up a lot. I would drop the term p2p and just say that sometimes relays are behind NAT and maybe we need a holepunching spec.

Maybe p2p means that every server must also be a client, and vice versa, and then you call them "nodes". This seems to be an unnecessary additional constraint on what people can do. I think the server and client components should always be separable. I can't think of a reason to force them together.

This is the extent of my thinking on p2p. If I missed something about what makes p2p distinct, please mention it so I can integrate it into my thinking.


Discussion

I think an important part of it is people's expectations. If you present something as being p2p then people are more chill than if you present it as being client-server (even if you never use technical terms in your presentation).

The thing is that p2p tech has always been so terrible that even those more chill expectations were far from being met. Playing around with some of this iroh stuff, though, I'm starting to wonder whether the tech has finally caught up.

https://www.shaga.xyz/ - this is iroh powered and works better than it should for p2p. Something is going on.

Iroh runs over QUIC, which has great features but is UDP-based, meaning you can't use Tor. So it doesn't provide both privacy and p2p. Also, web-based clients probably can't do QUIC, and even if they can, they are not going to accept the TLS "raw public key".

No single transport can make everybody happy:

* WebSockets: the only transport that works in browsers. Also works with Tor. But you rely on DNS and CAs, and it is the lowest-performance choice.

* TCP: wouldn't have to rely on DNS and CAs, Tor is supported, and performance is in the middle. But it cannot support browser-based clients.

* QUIC: wouldn't have to rely on DNS and CAs, and it has the best performance by a long shot. But you can't use Tor or browser-based clients.

Doesn't WebTransport get you CA-less QUIC in browser? Via the server cert hashes thing? Like as long as it matches one of the hashes *you* provide then you're good?
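Roughly, yes. A sketch of what that looks like on the browser side, using the W3C WebTransport API's `serverCertificateHashes` option (the URL and hash value here are placeholders, not from any real deployment):

```typescript
// Sketch of pinning a WebTransport server by certificate hash instead of
// relying on the Web PKI. serverCertificateHashes makes the handshake
// succeed only if the SHA-256 digest of the server's certificate matches
// one of the supplied values.

// Decode a hex digest into the Uint8Array the API expects.
function hexToBytes(hex: string): Uint8Array {
  const bytes = new Uint8Array(hex.length / 2);
  for (let i = 0; i < bytes.length; i++) {
    bytes[i] = parseInt(hex.slice(i * 2, i * 2 + 2), 16);
  }
  return bytes;
}

const pinnedHash = "aa".repeat(32); // placeholder 32-byte digest, hex-encoded

// WebTransport only exists in browsers, so guard the lookup.
const WT = (globalThis as any).WebTransport;
if (WT) {
  const transport = new WT("https://relay.example:4433", {
    serverCertificateHashes: [
      { algorithm: "sha-256", value: hexToBytes(pinnedHash) },
    ],
  });
  // transport.ready rejects if the certificate matches no pinned hash.
  transport.ready.then(() => console.log("connected"));
}
```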

I didn't know that. I didn't consider WebTransport. I'm reading about it now.

I'm already both excited and bummed. A certificate hash isn't as good as a raw public key. With a raw public key I can know a priori exactly what to expect before ever connecting. With a certificate hash I need to actually have the server certificate first, which has a signature I could not predict a priori. This can be worked around, though. I really hate the baggage of certificates (X.509 is a nightmare of ancient crap), but the industry won't let it go.

I think many browsers support HTTP/3 over QUIC and can do WebSockets over it, which means you could support both: the client adds, say, some header requesting HTTP/3 if it supports it; if the relay doesn't understand the header, no problem, but if it does, it upgrades to WebTransport over HTTP/3 on QUIC.
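For what it's worth, HTTP already defines a standard mechanism along these lines: a server advertises HTTP/3 support in an `Alt-Svc` response header on an ordinary HTTP/1.1 or HTTP/2 response, and clients that understand it can switch transports while others simply ignore the header. For example:

```
Alt-Svc: h3=":443"; ma=86400
```

(`h3` names HTTP/3 on UDP port 443, and `ma` is how many seconds the client may cache the advertisement.)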

And the damn standard didn't bother to define clientCertificateHashes, only serverCertificateHashes.

You could in theory generate a self-signed cert once for a peer/server combo and cross fingers it never changes. Or at least is long-lived. The hash would then come from this cert and be distributed. Then tie the public key to that specific cert structure. Something like that? Dunno. Feels like there's room for jigging. Or use nostr for out-of-band, but that might get circular?
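The "distribute the hash" half of that is at least simple. The value `serverCertificateHashes` compares against is just the SHA-256 digest of the certificate's DER encoding, so assuming you have the DER bytes of the long-lived self-signed cert, the whole thing is one hash call (`certHash` is a made-up helper name):

```typescript
import { createHash } from "node:crypto";

// Compute the SHA-256 digest of a certificate's DER encoding -- the value
// that would go into serverCertificateHashes on the browser side.
function certHash(der: Uint8Array): Uint8Array {
  return new Uint8Array(createHash("sha256").update(der).digest());
}
```

Whatever out-of-band channel you trust (a nostr event, a QR code, a config file) then carries this 32-byte value in place of a raw public key.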

I am doing my own certificates. I am basically doing my own everything. 😛

Mostly because I want to be able to think about how things ought to work rather than how they do. I want my certs to map 1:1 with application scopes. Why? Because I don't want crappy applications anywhere near keys I didn't give them access to. Also, you almost never want to use your master identity key; every time you unlock it is a chance for compromise. We can't expect grandmother to know good key hygiene.

So WebTransport still fails us in a few ways:

* You still lose Tor support. Tor is TCP based.

* You can't connect to a server and verify it by its public key; you have to have a hash of its certificate somehow

* Client-side certificates still use Web PKI, so they can't be used for auth

* You layer on a lot of complexity (https://www.w3.org/TR/webtransport/ is not straightforward) with marginal benefits.

I agree with nostr:npub1w4jkwspqn9svwnlrw0nfg0u2yx4cj6yfmp53ya4xp7r24k7gly4qaq30zp about being "connection type agnostic". A message-based protocol can run over any transport, including bluetooth, or paper airplanes.
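A sketch of what "connection type agnostic" can look like in practice: the protocol layer codes against a minimal message-transport interface, and each concrete carrier (WebSocket, QUIC, Bluetooth, whatever) implements it separately. The names here are illustrative, not from any existing library:

```typescript
// Minimal transport abstraction: the protocol layer only ever sees framed
// messages, never sockets. Anything that can move bytes qualifies.
interface MessageTransport {
  send(msg: Uint8Array): Promise<void>;
  onMessage(handler: (msg: Uint8Array) => void): void;
  close(): Promise<void>;
}

// In-memory loopback transport: trivially satisfies the interface and is
// handy for tests. A WebSocket, QUIC, or even sneakernet transport would
// implement the same three methods.
class LoopbackTransport implements MessageTransport {
  private handlers: Array<(msg: Uint8Array) => void> = [];

  async send(msg: Uint8Array): Promise<void> {
    // Deliver to all registered handlers immediately.
    for (const h of this.handlers) h(msg);
  }

  onMessage(handler: (msg: Uint8Array) => void): void {
    this.handlers.push(handler);
  }

  async close(): Promise<void> {
    this.handlers = [];
  }
}
```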

I think it's worth digging a little.

I'm not big on Tor at all, so if that's a game-changer then fair.

Tying a public key to a long-lived hash may be doable, though you'd need a refresh mechanism for when the browser forces a new certificate (the spec caps these pinned certs at a two-week validity window). Again, though, worth digging.

For Web PKI, I've read chatter before about some kind of push for secondary authentication; who knows, it's all very new.

Complexity, no doubt.

But I will say that if the performance you get with iroh holds up, then it might be worth every trade-off. For me, p2p with this kind of performance is just nuts; I've never seen anything like it in my internet history.

Not allowing browser based clients is a feature 😈

I kid... sort of. There isn't any reason browsers can't be extended to support QUIC etc.

I think you kinda want to be connection type agnostic. There are connections you'd prefer (QUIC) and connections you'd settle for (TCP, Bluetooth).

The nice thing is that you don't have to develop them all at once. You just pick the easiest to implement and make an algorithm that "chooses" (i.e. just `return ConnectionType::quic`), then add types after it works at all and as demand develops.
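That "stub first, generalize later" idea might look something like this; the enum and preference order are illustrative:

```typescript
// Connection types in rough order of preference, per the discussion above.
enum ConnectionType {
  Quic = "quic",
  Tcp = "tcp",
  WebSocket = "websocket",
  Bluetooth = "bluetooth",
}

// v0 of the chooser could literally hardcode the one transport implemented:
//   function chooseTransport(): ConnectionType { return ConnectionType.Quic; }

// Later: pick the most preferred type both sides actually support.
const PREFERENCE: ConnectionType[] = [
  ConnectionType.Quic,
  ConnectionType.Tcp,
  ConnectionType.WebSocket,
  ConnectionType.Bluetooth,
];

function chooseTransport(available: Set<ConnectionType>): ConnectionType {
  for (const t of PREFERENCE) {
    if (available.has(t)) return t;
  }
  throw new Error("no mutually supported transport");
}
```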

I'm not sure there is any practical use for bypassing IP on something like Bluetooth, when most Bluetooth devices of any recent vintage are capable of doing IP. I mean, at the level you'd usually use it for this, it's just a point-to-point connection like a USB serial bus, but PAN is available, so why complicate things with a different flow-control protocol? (And that's a bit of a nightmare in itself; it's why QUIC still mostly follows what TCP does.)

Of course, since it's a point-to-point connection you can just rely on hardware-based flow control for a duplex connection, or you can implement RTS/CTS in your protocol, but... why not just use TCP/IP when you can?

All I mean by peer-to-peer is that there are no special nodes in the graph. I don't really care about the implementation. I only care that things function between 2 isolated users in the same manner that they function in a fully connected graph.

In some sense that means that every node is a client/server combo but that doesn't mean there are no other connections or that the permissions set on the node can't make it function as a relay, STUN server, or TURN server.

Maybe it is just better to say decentralized, but that term might be even more overloaded than peer-to-peer.