Oh for sure, script validation is a more "real and present danger". I'm discussing it more theoretically; the "state" is the utxo set, and theoretically you need all of it to do validation. Blocks, you don't. I think cardinality is what matters, though I'm not sure of the details, since a utxo serialization is roughly constant in size.
Lookups in a set aren't free, so a limit must exist somewhere, right?
Yes, I'm trying to find out what that limit would be in leveldb without much luck.
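For anyone who wants to poke at it empirically, here's a minimal micro-benchmark sketch using plyvel (Python LevelDB bindings) that times point lookups as the entry count grows. The 36-byte outpoint-style keys and 40-byte dummy values are assumptions about a utxo-like workload, not Bitcoin Core's actual coin serialization or dbcache behaviour:

```python
# Rough micro-benchmark: LevelDB point-lookup latency as entry count grows.
# Assumes a utxo-like workload (36-byte outpoint key, 40-byte dummy value);
# this is a sketch, not a model of Bitcoin Core's chainstate database.
import os
import time
import plyvel

db = plyvel.DB('/tmp/utxo-bench', create_if_missing=True)

BATCH = 100_000
for step in range(1, 11):  # grow the DB to 1M entries in 100k steps
    with db.write_batch() as wb:
        for i in range((step - 1) * BATCH, step * BATCH):
            key = i.to_bytes(32, 'big') + (0).to_bytes(4, 'big')  # "txid" + vout
            wb.put(key, os.urandom(40))                           # dummy coin data

    # Time 10k point lookups spread across the current key range.
    start = time.perf_counter()
    for i in range(0, step * BATCH, step * BATCH // 10_000):
        key = i.to_bytes(32, 'big') + (0).to_bytes(4, 'big')
        db.get(key)
    elapsed = time.perf_counter() - start
    print(f'{step * BATCH:>9} entries: {elapsed / 10_000 * 1e6:.1f} µs/lookup')

db.close()
```

Scaling this to billions of entries obviously needs more disk and patience, but the same loop structure applies.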
I see some GitHub issue commenters saying they operate leveldb DBs with multiple TBs and 100s of billions of entries with no issues.
I haven't done the math yet, but I think it would take decades of constant utxo spam at a 4 MB per 10 minute rate to get there.
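Rough back-of-envelope for that, assuming ~31 bytes of block space per newly created output (a deliberately crude figure for a minimal output plus amortized tx overhead, not an exact accounting):

```python
# Back-of-envelope: how long would 4 MB per 10 minutes of pure utxo creation
# take to reach 100 billion entries? BYTES_PER_NEW_UTXO is an assumption.
BLOCK_BYTES = 4_000_000          # 4 MB of block space every 10 minutes
BYTES_PER_NEW_UTXO = 31          # assumed block-space cost per created output
BLOCKS_PER_YEAR = 6 * 24 * 365   # one block per 10 minutes

utxos_per_year = BLOCK_BYTES // BYTES_PER_NEW_UTXO * BLOCKS_PER_YEAR
target = 100_000_000_000         # 100 billion entries

print(f'~{utxos_per_year / 1e9:.1f} billion new utxos per year')
print(f'~{target / utxos_per_year:.1f} years to reach 100 billion')
```

Under those assumptions it works out to roughly 7 billion new utxos per year, so on the order of 15 years of nonstop spam to hit 100 billion entries.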
Nice, good to know there's no trivial limit there, at least not from db operations alone. Presumably we would hit other limits. I guess this is a case where simulating on a testnet might be the way to find the practical limits. Not a trivial project, though!