p2p.

People want it mainly because they don't want to rely on data centers. But let's walk through an example.

Perhaps you have a node in your home network, and I have a node in my home network. For these to communicate, one of them has to contact the other first. So let's say my node contacts your node. OK. Then my node is acting like a client, and your node is acting like a server. So in fact p2p IS actually client-server. It is just a special case of client-server, where the client and server are forced to be glued together into a node.
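To make that concrete, here is a minimal sketch (hypothetical code, not any real protocol) of a "p2p" exchange: each node is just a server half and a client half glued together, and whichever node initiates contact is, at that moment, the client.

```python
import socket
import threading

def server_half(listener):
    """The server half of a node: accept one connection and echo back."""
    conn, _addr = listener.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"echo: " + data)

def run_node_pair():
    # Node B's server half listens; port 0 lets the OS pick a free port.
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    port = listener.getsockname()[1]
    t = threading.Thread(target=server_half, args=(listener,))
    t.start()

    # Node A's client half initiates the contact. At this moment A is
    # acting as a client and B as a server, even though both are "peers".
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"hello from node A")
        reply = c.recv(1024)

    t.join()
    listener.close()
    return reply
```

Nothing in the exchange itself distinguishes it from ordinary client-server; the only "p2p" part is that both halves live in the same process.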

Therefore IMHO, client-server architectures cover all p2p architectures and more. And gluing the client and server together is a completely unnecessary requirement.

In fact, I want clients on transient machines, laptops that I turn off. But I don't want servers offline, I want them on hardware that stays on. So the p2p node idea falls apart. Clients and servers should be separate.

But of course this isn't all that people are talking about when they talk about p2p. They are talking about hole-punching.

So all I'm trying to argue here is that client-server is not some worse alternative to the superior peer-to-peer, but rather that client-server is the superior architecture.

And it makes more sense to me to suggest that there should be ways to make servers accessible to the Internet even if they run on home networks, perhaps with hole-punching rendezvous services. Running a server at home is an excellent idea. But nothing else about p2p seems like a good idea to me.

I like p2p for the following reasons:

Communication should work for two isolated nodes in a cave or on some planet in Andromeda.

It is the only fully permissionless way to communicate.

That being said, none of it is real. You can't actually connect any two devices without permission unless you are basically close enough to talk.

So when I say peer-to-peer I always mean it very loosely. Or rather, very specifically: I mean that if you can get data from node A to node B, then that should be all the protocol requires. I shouldn't care about how that happens.

I also shouldn't worry about any intermediary being able to sniff or spoof the data. They can deliver it... or not. You can't alter that.

This is partially why I am going with sending metadata-only packets. Node A sends some encrypted packet to node B. Who cares what is in it? That is up to the application to care about.

If you want more data, tell the other node a file hash in your notification packet and then they can ask anyone for it. Who cares where it is cached? It is just a blob of encrypted, meaningless data.
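A sketch of that content-addressing idea, assuming SHA-256 as the hash (the class and method names here are mine, purely illustrative): any node can cache the blob, and the requester verifies the bytes against the hash, so it never has to trust whoever served it.

```python
import hashlib

def blob_id(blob: bytes) -> str:
    """A blob is identified only by its SHA-256 hash."""
    return hashlib.sha256(blob).hexdigest()

class Cache:
    """Any untrusted node can run one of these and serve anyone's blobs."""
    def __init__(self):
        self._store = {}

    def put(self, blob: bytes) -> str:
        h = blob_id(blob)
        self._store[h] = blob
        return h

    def get(self, h: str) -> bytes:
        blob = self._store[h]
        # Verify before trusting: a lying cache can't substitute content.
        if blob_id(blob) != h:
            raise ValueError("cache returned corrupted blob")
        return blob
```

Because the identifier is the hash of the content, where the blob lives is irrelevant to its integrity.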

Of course, since you get a whole packet to state your case for whatever actions the application is capable of, there is no reason not to just stick your 1024-byte tweet in there instead of downloading it separately.

Anyway. I agree that the actual topology that you end up with is client server, but I still think it should work on a local network as well.


Discussion

Peer-to-peer means not having a 3rd node sitting in the middle of the communication in some data center. E.g. rather than client->server->client it is just peer->peer.

But I've run across people who think a client-server protocol necessarily isn't peer to peer. I'm just trying to argue that the p2p protocols are still client-server protocols. Nostr could be written as a p2p application without changing any of the NIPs, just by making every client also a relay.

Your point is the part I didn't discuss because it wasn't in my head at the time --- the middleman. You want direct communication over anything with no middleman. Fair enough. And I think that should be achievable, even if the communicating pieces are a client and a server.

Yes, that. I would add that the nodes should also have full control of what other nodes they connect to. Your Honey Pot is a good example. You also touched on it via protocols that have dedicated hole-punch IPs. I call this "no special nodes": no node should have a special or even default role in the network that any other node could not fulfill at the behest of client nodes.

Some initialization is needed, but the closer it can be driven by end-point choice the better.

A rule that goes hand in hand with it is "no special codes": there will, of course, be protocol-level identifiers, but things like blockchains, URLs, and global names that require an authority are out. All meaning is what the users say it is.

At least in nostr with the outbox model, there is no routing, and nodes don't control which other nodes they communicate with. I imagined the thing just like browsers connecting to websites. But a completely different architecture could be routed, where a node connects to certain known and trusted neighbors only, and data is routed node-to-node. Such protocols are slower and less reliable, having so many middlemen, but of course if you use Tor under nostr you are pretty much doing that same thing.

I lean towards clients connecting to untrusted nodes, and clients being considered very difficult to develop because they must be security hardened. In the same way that browsers must be, and in a similar way that computer hardware has to be bug-free before it tapes out. Making nostr "simple" proliferates insecure, half-assed software. I lean towards complexity as a way of weeding out developers who can't be trusted to make hardened software.

But this is just a current leaning. I'm interested in exploring architectures where clients don't have to talk to nodes they don't trust... I just don't see the big picture right now of how this could work without so many downsides.

I don't think it has to be super complicated. At least you can aim for exactly what you need and nothing you don't, to try to keep implementation "easy" and bugs low.

For instance I wouldn't send a timestamp. It isn't that you don't want them, it is that you can't trust them. Things are just going to break if you require timestamps and then try to do clever things with them. No one will use them consistently or correctly.

I do agree that Nostr's simplicity makes everything else you want hard. If you want to encrypt properly, good luck. The best you can do is TLS, and just trust your relays.

Back to p2p, yes it is slow and buggy, but we can have the best of both worlds by doing web-of-trust stuff in a peer-to-peer like fashion but actually disseminate data in a traditional hub and spoke architecture.

I wouldn't make it wide open. I think you want authentication as part of the protocol for automatic spam suppression, but you can still have big servers serving stuff.

I am tempted to make requesting files unauthenticated. If they are identified only by their hash, good luck guessing a 32-byte string. The counter argument is that if you know a commonly shared file ("never gonna give you up"), then you could find out who is interested in it by seeing whose relays have it cached.

Authentication should be fairly straightforward, however: a server just accepts connections from a list of keys (not the user's master key, just one they use for talking to that server). If I am using a particular relay as a mailbox, I can give it a list of my friends' keys to filter on.
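A toy sketch of that allowlist filtering (class and method names are hypothetical): the relay simply drops any packet whose sender key isn't on the list I gave it, which is the automatic spam suppression mentioned above.

```python
class MailboxRelay:
    """A relay acting as my mailbox, filtering on a key allowlist I provide."""
    def __init__(self, allowed_keys):
        self.allowed = set(allowed_keys)
        self.inbox = []

    def accept(self, sender_key: bytes, packet: bytes) -> bool:
        # Drop anything not from a key I explicitly listed.
        if sender_key not in self.allowed:
            return False
        self.inbox.append(packet)
        return True
```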

So every request is something like
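The original field list isn't reproduced here; as a purely hypothetical shape, consistent with the "first two keys" and payload described next, a request might carry:

```python
from dataclasses import dataclass

# Hypothetical request shape (the exact field names are mine, not the
# author's): two keys up front, then an opaque encrypted payload.
@dataclass(frozen=True)
class Request:
    sender_key: bytes     # the key used for talking to this server, not a master key
    recipient_key: bytes  # whose mailbox this is destined for
    payload: bytes        # encrypted; meaningless to every intermediary
```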

I complect things further by scoping those first two keys to specific application permissions, so the final destination node knows how to unpack the payload.

Key management needs to be perfect. I don't want a "Words with Friends" app to be able to read bank transactions. With properly scoped keys you could leak your whole keystore database to a malicious application developer and still not be compromised (assuming it is encrypted at rest).
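One way such scoping could work, as a sketch (the HMAC-based derivation is my assumption, not the author's design): derive each application's key from the master with the application's scope string, so one scope's key reveals nothing about the master or about any other scope.

```python
import hmac
import hashlib

def scoped_key(master: bytes, scope: str) -> bytes:
    """Derive a per-application key; HMAC-SHA256 is one-way, so a leaked
    derived key can't be worked back to the master or to other scopes."""
    return hmac.new(master, scope.encode(), hashlib.sha256).digest()
```

The "Words with Friends" key and the bank key come from the same master but are computationally unrelated to each other.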

All that (I get off on such tangents) is to say that while the key requirements may be complicated, the actual protocol does not need to be.

Speaking of software and security, does Gossip AppImage look for new updates or would that be a security issue?

It doesn't look for new updates. I'm not against it though.