Today I am going to talk about the Trilemma: how it is asserted and presented, and its actual, tangible impact on designing permissionless, censorship-resistant distributed consensus systems.

The trilemma is as follows: there is a theoretical limit to the total security, decentralization and scalability you can get in any permissionless consensus system. Think of it like setting up a character in a video game: you have a set number of points to distribute among the available traits, and the more you put into one trait, the less you can put into another.

In the trilemma, you can "max out" 2 of the 3 traits. If you have maximum security and maximum decentralization, you have no (or very little) scalability; if you max out decentralization and scalability, you get little if any security; and if you max out security and scalability, you get very little decentralization.
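
To make the analogy concrete, here is a minimal sketch treating the three traits as a fixed point budget, so raising one necessarily lowers what is left for the others. The budget size and trait names are assumptions for illustration only, not a model of any real network:

```python
# Illustrative toy model of the trilemma as a fixed "point budget".
# The budget of 10 and the trait names are assumptions for the example,
# not measurements of any real network.

TOTAL_BUDGET = 10

def remaining_scalability(security: int, decentralization: int) -> int:
    """Points left for scalability after allocating the other two traits."""
    spent = security + decentralization
    if spent > TOTAL_BUDGET:
        raise ValueError("allocation exceeds the budget")
    return TOTAL_BUDGET - spent

# Max out security and decentralization -> nothing left for scalability.
print(remaining_scalability(security=5, decentralization=5))  # 0
# Back off both a little and some scalability becomes available.
print(remaining_scalability(security=4, decentralization=4))  # 2
```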

This is at the crux of the BTC block size debate; the prevailing argument is that increasing the block size increases scalability but reduces decentralization. Nodes can't hold the whole blockchain as easily, so the number of nodes drops; the only nodes that continue to exist are those privileged enough to afford the ever-increasing storage requirements. This concept does hold water, although I think that decentralization has a lot of other factors and that the blockchain footprint isn't even the most important one, but that's another discussion.

The trilemma is widely asserted, hands are thrown up, and we all decide there's nothing to be done about it. Either it's a steadfast law of the universe, or we aren't smart enough and some day some other smart guy will come along and solve it, but for now we are stuck with it; "they" are working on it somewhere. What we don't talk about is whether we are even at the limits of the trilemma, assuming it is a steadfast rule, or whether "they" have already worked on it and improved it, and whether something exists right now that, if implemented, improves the status quo.

There are 3 sets of limits on how close to optimal you can get a system before you hit the wall of the trilemma: the theoretical limits of what can be done in reality, the practical limits imposed by external factors that are not subject to the rules of the system, and the limits imposed by the architecture of the system we are building. It is entirely possible, and even very likely, that the architectures of the systems we build, to which the trilemma applies, aren't optimized such that we max out what we can get before actually hitting the limits imposed by the trilemma.

Here's a thought experiment: suppose you have infinite security and infinite decentralization. You would get 0 scalability in that scenario; the network would be unusable. So, practically speaking, you'd have no security or decentralization either, because you'd have a non-working system. If you had one less than infinity of each, you'd still have 0 scalability. So what's the limit, the point at which you have 1 transaction or user? These are the practical limits. You cannot get infinite decentralization; you can only get, at most, a number of independent nodes equal to the number of people that exist, and even then you'd have near 0 scalability; nobody would be able to use it. Likewise, you cannot get infinite security; you can only get security equal to the amount of energy that can be expended to guarantee it, and so on. The true lines, where you get any scalability at all, are somewhere below that. Now do this thought experiment with infinite security and scalability, and then with infinite decentralization and scalability, and you'll understand what the practical limit means fundamentally: there are facts about the state of the world right now that impose limits even if our architecture were perfect and in line with the theoretical limits we could potentially reach. Part of what this means is that we don't need infinite scalability; we only need to resolve the trilemma up to the point where engaging with the network is near frictionless for the people who want to use it. Our practical limitations may say something about our practical requirements.
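
As a rough illustration of one such practical limit (this is an invented toy model, not a measurement of any real network): if every transaction must reach every required node, throughput is capped by the slowest node you still insist on including, so demanding that more and more of the population run nodes drives the ceiling down.

```python
# Toy model (assumption, not real data): throughput is capped by the slowest
# node that is still required to receive and validate every transaction.
# Bandwidths are in bytes per second; all figures are hypothetical.

def max_tps(node_bandwidths: list[float], required_nodes: int,
            tx_size_bytes: int = 400) -> float:
    """Throughput ceiling if the `required_nodes` best-connected nodes
    must all keep up with every transaction."""
    ranked = sorted(node_bandwidths, reverse=True)
    slowest_required = ranked[required_nodes - 1]
    return slowest_required / tx_size_bytes

# Hypothetical mix: a few data centers, many home and mobile connections.
bandwidths = [1e9] * 10 + [1e7] * 1000 + [1e5] * 100000
print(max_tps(bandwidths, required_nodes=10))      # only data centers: high ceiling
print(max_tps(bandwidths, required_nodes=100000))  # near-universal participation: low ceiling
```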

(It is important to note that with PoW, we are always at the security frontier imposed by the practical limits, because the energy put into securing the network is always the maximum allowable, assuming the cryptography and technology used to do this are sound. I'd also like to point out that the technology available to us is also one of these practical limits, but that gets messy because oftentimes technology exists to push these limits that isn't widely used for one reason or another.)

Currently, in Bitcoin, we keep every single transaction in its entirety forever. This reduces decentralization or scalability, depending on which limit you place on the block size. This is clearly not optimized; you don't fundamentally need to remember every transaction ever engaged in forever to guarantee the security of the network. You do in Bitcoin because of its architecture, and pruned-node schemes that do things like keep only the UTXO set and block headers reduce the security of the network if there aren't enough full nodes. But fundamentally this is not a requirement; this limit is imposed by the architecture of the system. If you could remove this need entirely and keep only the UTXO set, you'd still have the trilemma (I will explain this later), but you'd significantly increase how much decentralization and scalability you can get for the same security guarantees.
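
For intuition, here is a minimal sketch of what a pruned node in this sense holds and how it updates per block: spent outputs are dropped, new outputs are added, and only the headers accumulate forever. The types, field names, and sizes are illustrative assumptions, not Bitcoin's actual data structures or serialization.

```python
# Simplified sketch of a pruned node's state: block headers plus the
# current UTXO set only. Field names and sizes are assumptions.
from dataclasses import dataclass, field

@dataclass
class Block:
    header: bytes                         # roughly 80 bytes in Bitcoin
    spent: list[tuple[str, int]]          # outpoints consumed by this block
    created: dict[tuple[str, int], int]   # new outpoint -> amount

@dataclass
class PrunedNode:
    headers: list[bytes] = field(default_factory=list)
    utxos: dict[tuple[str, int], int] = field(default_factory=dict)

    def apply_block(self, block: Block) -> None:
        self.headers.append(block.header)   # headers are kept forever
        for outpoint in block.spent:        # spent outputs are forgotten
            del self.utxos[outpoint]
        self.utxos.update(block.created)    # only live outputs are stored
```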

So bigger blocks... This doesn't resolve the problem; the architecturally imposed limit still exists. It exists for Monero as well, even with a dynamic block size, because you still have to keep every transaction that ever happened: even though you can't use them for anything at all unless you engaged in them, and they're basically random noise, you still have to keep them to get the security guarantee. I would say it's worthwhile, just as many tradeoffs are worthwhile, because of the additional security features you get from it, but as a matter of fact it is a stopgap, a practical limit: if we could get the same privacy and security guarantees without archiving the entire obfuscated transaction history, we would; the Monero community is basically unanimous on this point.

So, is there an architecture that pushes us to the practical limit and enables us to get more decentralization and/or scalability with the same security guarantees? It turns out there is! It sacrifices some of Monero's security guarantees, given that privacy is a part of the security model, and it sacrifices programmability entirely, but it enables us to keep only the UTXO set and nothing more while still getting the security guarantees that Bitcoin gives us subject to the practical limits.

Mimblewimble, as specified by Tom Elvis Jedusor, is an interactive protocol that, due to the signature scheme used, turns every block into one big coinjoin transaction and the entire blockchain into one periodically updating block. All spent outputs can be safely removed (this is known as cut-through), and the only thing you have to keep are the UTXOs. There is no sacrifice in security as a result of this architecture compared to Bitcoin (there is compared to Monero, because there's no way to enforce cut-through on every node as a requirement, so a transaction graph can be created, which is why Monero doesn't adopt this scheme). In fact, you get a few more security guarantees: amounts are obfuscated, recipients are obfuscated, and because of the interactivity you cannot send transactions to nonexistent addresses; the only way coins can be lost forever is by losing the private key to them, and it is much more difficult to send coins to a scammer because the party you're interacting with has to send you back the signed transaction to receive the coins.
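
A heavily simplified sketch of the cut-through idea follows. It ignores the Pedersen commitments, range proofs, and kernel signatures that make the real protocol verifiable, and only shows the bookkeeping: when transactions are aggregated, any output that is spent within the same aggregate cancels against the input that spends it, leaving only the surviving outputs.

```python
# Simplified illustration of Mimblewimble-style cut-through: aggregate many
# transactions and cancel any output that is also spent within the aggregate.
# Real MW works over commitments with kernels; plain strings stand in here.

def cut_through(transactions: list[tuple[list[str], list[str]]]) -> tuple[list[str], list[str]]:
    """Each transaction is (inputs, outputs). Returns the aggregate after
    removing every output that is consumed within the same aggregate."""
    inputs: list[str] = []
    outputs: list[str] = []
    for tx_inputs, tx_outputs in transactions:
        for i in tx_inputs:
            if i in outputs:
                outputs.remove(i)   # intermediate output: cancel it entirely
            else:
                inputs.append(i)    # spends a pre-existing output
        outputs.extend(tx_outputs)
    return inputs, outputs

# A pays B, then B pays C within the same aggregate: B's output disappears.
txs = [(["utxo_A"], ["utxo_B"]), (["utxo_B"], ["utxo_C"])]
print(cut_through(txs))  # (['utxo_A'], ['utxo_C'])
```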

(It is important to note that this is not the Mimblewimble specification employed today, which is slightly different; I'll get into that in a minute.)

In the original MW scheme, you don't even need a block size at all. The blockchain scales with the number of UTXOs only; you can put as many transactions as you want into a block and the spent outputs get cut through. The blockchain doesn't grow by the block size every block (assuming maximum utilization), but only by the number of new UTXOs, that is, coinbase transactions and cases where one UTXO is sent to two or more people. You could have a 10MB block, and if 90% of the new outputs simply replace existing ones that get cut through, the entire chain only grows by 1MB. When multiple UTXOs are swept into one, the size of the chain actually decreases.
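
As a back-of-the-envelope version of that arithmetic (the sizes are chosen to match the example above, not protocol constants):

```python
# Back-of-the-envelope chain growth under cut-through. Numbers are
# illustrative assumptions, not Mimblewimble protocol constants.

block_size_mb = 10.0
fraction_replacing_existing = 0.9  # new outputs that replace outputs removed by cut-through

net_growth_mb = block_size_mb * (1 - fraction_replacing_existing)
print(net_growth_mb)  # 1.0 MB of actual chain growth for a 10 MB block

# Sweeping many old outputs into one new output shrinks the chain:
removed_outputs_mb = 0.5
created_outputs_mb = 0.05
print(created_outputs_mb - removed_outputs_mb)  # -0.45 MB: the chain gets smaller
```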

So the trilemma is solved! Well, not really. I said I'd get to this later, and here it is: the blockchain will continue to grow in perpetuity, forever; it just grows at a much, much slower rate. Specifically, keys get lost and coinbase transactions get created, so given infinite time it will grow to infinity, but that growth is sublinear with time, significantly less than Bitcoin's. But optimization is at the practical limit: decentralization is inversely proportional to scalability given the same security guarantees, and it approaches the theoretical limit. The size of any given block, and hence its scalability, is restricted only by the block time and the latency of the network itself. Over vast distances, like between planets, you can hit this theoretical limit, but here on Earth the real limit is the practical limit of network bandwidth, that is, it is practically limitless. With a combination of PoW and the original Mimblewimble, you have optimized the trilemma such that, while there is still a theoretical limit, block size is no longer a factor and the practical limits on scalability become bandwidth and verification time.

I said earlier that this version of MW isn't the one widely deployed today. The original MW has a limitation: you can only send and receive transactions. You cannot have any programmability at all. No time locks, no multisig, nothing. It's just money, cash. So there's no way to trustlessly enforce conditions on the movement of money on the network, and that's a significant enough limitation that a compromise is called for. This compromise came in the form of Andrew Poelstra's improved Mimblewimble specification, as implemented in Grin, which added transaction kernels. This means that a small piece of data from every transaction must be kept forever to get the same security guarantees that Bitcoin provides, but with it you get some pretty amazing things. Blockchain growth is still sublinear with time, but a little faster than without the kernels.
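
A rough storage model under these assumptions: the chain footprint is roughly the live UTXO set plus one small kernel per historical transaction plus headers, so only the kernel and header terms grow without bound. The byte sizes below are ballpark figures treated as assumptions, not exact protocol values.

```python
# Rough storage model for a kernel-based MW chain. All byte sizes are
# assumed ballpark figures for illustration, not exact protocol values.

def chain_size_bytes(live_utxos: int, total_txs: int, blocks: int,
                     utxo_bytes: int = 750,     # commitment + range proof (assumed)
                     kernel_bytes: int = 100,   # per-transaction kernel (assumed)
                     header_bytes: int = 300) -> int:  # per-block header (assumed)
    """Only the kernel and header terms keep growing; the UTXO term tracks
    the amount of live money, not the length of history."""
    return (live_utxos * utxo_bytes
            + total_txs * kernel_bytes
            + blocks * header_bytes)

# Hypothetical numbers: 10M live outputs, 500M historical transactions, 2M blocks.
print(chain_size_bytes(10_000_000, 500_000_000, 2_000_000) / 1e9, "GB")
```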

(I will digress a little bit here, forgive me. With MW, you get the usual programmability that Bitcoin gives you, but in a way that no outside observer can see is even happening. You also get the ability to build a smart contract consensus system that has finality on chain but is entirely private between participants, where the entire network doesn't have to verify every transaction within the system at all, as opposed to existing smart contract systems. You inherit the ability to throw away the historical state of these systems and keep only their current state. For some reason not a lot of work is going into building these systems, but it makes all the work going into things like Ethereum, rollups, and attempts at fully private smart contract systems based on Monero futile. I genuinely don't understand why nobody is working on something like this; it seems like a very impactful endeavor.)

One of the limitations of this improved MW is that you still need a block size. It can be significantly bigger than Bitcoin's, and it can be dynamic, but in order to ensure that blocks get finalized within the block time you have to impose some limit. That limit can scale as needed and doesn't impact the growth rate of the blockchain beyond the accumulation of transaction kernels; it is not a practical factor in the decentralization vs. scaling debate. What this means fundamentally is that, when discussing block size in relation to this trade-off, we aren't discussing the trilemma at all; we are discussing architectural limitations of Bitcoin and existing smart-contract-focused chains. The trilemma is not the limiting factor in this trade-off; the architecture of the systems we use is. We can, right now, get the same security guarantees, the same or improved decentralization, and significantly improved scalability with simply a different architecture. The trilemma is no longer a factor when we talk about scalability; our current choice of architecture is. Other considerations limit our ability to scale, not decentralization due to blockchain size.

I believe the trilemma exists and is fundamental and unavoidable, but I don't believe that the most prominent systems we have today even come close to hitting its limits. I believe that it is possible to get programmability and perfect privacy without storing any historical state at all, though no such architecture exists today. I believe that there are worthwhile trade-offs and considerations other than the trilemma, such as privacy and programmability, that justify not using existing ideas and techniques that push the limits while failing to satisfy those considerations, as well as justify compromises to preserve those important traits. I also believe that with better network architecture we can approach the theoretical limit on latency and really begin to hit the limits the trilemma sets for us. But for now, invoking the trilemma as a way to say "can't be done", when we are talking about architectures that impose unnecessary limits on decentralization and scalability that can be overcome with things that exist right now, is counterproductive.
