I think there are ways to make validating a block very slow due to script validation, but that is orthogonal to utxo cardinality.

Both blocks and utxos are stored on disk (blocks in flat files with a leveldb index, utxos in leveldb directly), and the block data is an order of magnitude bigger with no performance issues. However, utxo values are much smaller, so the number of entries is far larger.
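For a rough sense of scale (my approximation, not the exact on-disk encoding): Core keys each utxo as a 'C' prefix plus txid plus vout, with a compressed coin as the value, so an entry is only tens of bytes.

```python
# Rough per-entry size model for a chainstate (utxo) leveldb record.
# The key layout ('C' prefix + txid + vout varint) follows Bitcoin Core's
# per-output model; the value size is a loose average, since the real
# encoding uses varints plus amount and script compression.

KEY_BYTES = 1 + 32 + 1        # prefix + txid + one-byte varint vout
VALUE_BYTES = 40              # assumed average compressed coin
ENTRIES = 150_000_000         # illustrative entry count

total_gb = ENTRIES * (KEY_BYTES + VALUE_BYTES) / 1e9
print(f"~{KEY_BYTES + VALUE_BYTES} B/entry x {ENTRIES:,} entries = ~{total_gb:.1f} GB")
```

A few gigabytes of chainstate can therefore hold well over a hundred million entries, which is why entry count rather than raw bytes is the interesting axis.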

The next release of Bitcoin Core has a change to greatly speed up disk lookups of utxos as well.


Discussion

Oh for sure, script validation is a more "real and present danger". I'm speaking more theoretically: the "state" is the utxo set, and in principle you need all of it to validate, whereas you don't need old blocks. Since a utxo serialization is roughly constant in size, I think cardinality is the relevant measure, though I'm not sure of the details.

Lookups in a set aren't free, so a limit must exist somewhere, right?

Yes, I'm trying to find out what that limit would be in leveldb without much luck.
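One way to probe it empirically is a standalone microbenchmark rather than a full node. A minimal sketch, assuming the plyvel leveldb bindings; the key/value shapes only loosely mimic chainstate entries, and N would need to be scaled up enormously to approach the sizes in question:

```python
# Hypothetical microbenchmark: fill a throwaway leveldb with synthetic
# utxo-shaped entries, then time random point lookups. Uses the plyvel
# bindings; the key/value shapes only loosely mimic Core's chainstate.
import hashlib
import random
import time

import plyvel

def key(i: int) -> bytes:
    # deterministic fake outpoint: 'C' prefix + "txid" + vout byte
    return b"C" + hashlib.sha256(i.to_bytes(8, "big")).digest() + b"\x00"

db = plyvel.DB("/tmp/utxo-bench", create_if_missing=True)

N = 1_000_000        # entries to insert; scale up (a lot) to probe real limits
BATCH = 10_000
for start in range(0, N, BATCH):
    with db.write_batch() as wb:
        for i in range(start, start + BATCH):
            wb.put(key(i), b"\x00" * 40)   # fake ~40-byte compressed coin

LOOKUPS = 10_000
t0 = time.perf_counter()
for _ in range(LOOKUPS):
    assert db.get(key(random.randrange(N))) is not None
dt = time.perf_counter() - t0
print(f"{LOOKUPS} random hits in {dt:.2f}s ({dt / LOOKUPS * 1e6:.1f} us each)")
db.close()
```

Rerunning the lookup loop at increasing N would show whether lookup latency degrades meaningfully as the entry count grows.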

I see some GitHub issue commenters saying they operate leveldb DBs with multiple TBs and hundreds of billions of entries with no issues.

I haven't done the math carefully, but I think it would take decades of constant utxo spam at a 4 MB per 10 minute rate to get there.
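Rough math, under assumptions of my own (about 50 bytes of block space per created output, blocks permanently full, and the ~100 billion entries mentioned above as the target), does land in the decades range:

```python
# Back-of-the-envelope: time to spam the utxo set up to ~100B entries.
# Assumed: ~50 bytes of block space per created output, blocks always
# full at 4 MB every 10 minutes, and no spam outputs ever get spent.

BYTES_PER_BLOCK = 4_000_000
BYTES_PER_OUTPUT = 50
BLOCKS_PER_YEAR = 144 * 365          # one block per 10 minutes
TARGET_ENTRIES = 100_000_000_000     # 100 billion

outputs_per_year = (BYTES_PER_BLOCK // BYTES_PER_OUTPUT) * BLOCKS_PER_YEAR
print(f"~{outputs_per_year / 1e9:.1f}B new utxos per year")
print(f"~{TARGET_ENTRIES / outputs_per_year:.0f} years to reach 100B entries")
```

That works out to roughly 4.2 billion new entries per year, so on the order of 24 years of sustained spam to reach 100 billion.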

Nice, good to know there's no trivial limit from db operations alone. Presumably we would hit other limits first. This seems like a case where simulating on a testnet might be the way to find the practical limits. Not a trivial project, though!