zerosync.org/zerosync.pdf

An interesting state-of-play update on the ZeroSync project.

The most important high-level idea to understand here is that the proofs are fixed length and very cheap to verify, independent of how much history you prove. This mind-blowing feat is accomplished (without trusted setup, mind you!) at a cost: the computation required to actually *create* the proof is enormous. It's easy to see why that tradeoff is particularly attractive for the initial sync of a bitcoin node - the history is a single, global (very big) thing.
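To make the interface concrete, here is a toy sketch of the shape of incrementally verifiable computation (recursive proving), which is the idea that makes fixed-size chain proofs possible. Everything below is illustrative, not ZeroSync's actual API, and the "proof" is a bare hash stub standing in for a real STARK; the point is only that proving does O(n) work over the history while the verifier touches a single fixed-size object.

```python
import hashlib

def prove_step(prev_proof: bytes, block: bytes) -> bytes:
    # A real recursive prover verifies prev_proof *inside* the new
    # proof; this stub merely commits to it.
    return hashlib.sha256(prev_proof + block).digest()

def prove_chain(blocks) -> bytes:
    proof = b"\x00" * 32          # genesis "proof" (illustrative)
    for block in blocks:          # the prover does O(n) work...
        proof = prove_step(proof, block)
    return proof

def verify(proof: bytes) -> bool:
    # ...but the verifier only inspects the final fixed-size proof,
    # so its cost is independent of how many blocks were proven.
    return len(proof) == 32
```

Note that `prove_chain` over a million blocks and over one block both hand the verifier the same 32 bytes; that constant-size output is the whole trick.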

Their current code apparently takes 3.5 hours for 3K transactions (i.e. approximately one block), so they are very far from breakeven (10 minutes per block). There are other big limitations too: they haven't yet done segwit, let alone taproot (so they can only look at pre-2017 history), and they haven't implemented witness validation, so they can only do an `assumevalid`-style initial sync. They are relying on utreexo, but that itself is not much of a limitation.
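The breakeven gap is easy to quantify from the numbers above (hardware and parallelism assumptions are unspecified in the post, so treat this as a back-of-envelope figure only):

```python
# Proving throughput vs. the chain tip, using the figures quoted above.
proof_hours_per_block = 3.5        # ~3K transactions, roughly one block
block_interval_hours = 10 / 60     # Bitcoin's 10-minute block target

speedup_needed = proof_hours_per_block / block_interval_hours
print(speedup_needed)  # → 21.0
```

So proving needs to get roughly 21x faster just to keep pace with new blocks, and far more than that for catching up on history to be practical.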

As much as that list of limitations might seem comical, this feels to me like the start of a slow paradigm shift.

The final section of the paper describes a rather speculative but fascinating `zkCoins` protocol, arguing for a much more powerful form of token exchange in a client-side-validation model (they claim it's better than the existing RGB/Taro designs, and I'm inclined to believe them, though of course it doesn't actually exist yet!).

Minting would need either a federation, on-chain SNARK verification for a proper two-way peg, or burning btc (I'd argue locking could perhaps also do the trick: time value of money, etc.).

Discussion

Doing ZK proofs of coin history is a very obvious addition to RGB/proofmarshal-style transactions. I've made that point for years.

Actually doing it of course, is much more challenging!

Re: Bitcoin sync, a serious danger of this stuff is that it's a reimplementation, which, like any reimplementation, is unlikely to get the protocol exactly right. This could lead to forks in the future if it's more widely used.

Good point to (re-)raise about reimplementation there. I wonder if you'll ever change your mind about a written spec? :)

It suddenly made me consider: what if a spec were written that seemed to be compatible with the entire history? Except we don't know what is inside all the scripthashes, so we can never know whether one of them is valid under Core but not under a future spec.
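To make the scripthash point concrete: an unspent P2SH output commits only to a hash of its redeem script, so two implementations can agree byte-for-byte on every confirmed block while their script interpreters remain untested against scripts that haven't been revealed yet. A minimal sketch (using sha256 as a stand-in for Bitcoin's actual HASH160 = RIPEMD160(SHA256(script)), purely so it runs everywhere; the script bytes are hypothetical):

```python
import hashlib

def script_commitment(script: bytes) -> bytes:
    # Stand-in for HASH160; real P2SH uses RIPEMD160(SHA256(script)).
    return hashlib.sha256(script).digest()

# The chain stores only the commitment of an unspent P2SH output...
redeem_script = bytes([0x51])   # hypothetical redeem script (OP_1)
utxo_commitment = script_commitment(redeem_script)

# ...so Core and a spec-based validator necessarily agree on the
# commitment, while their handling of the opcodes *inside* the script
# is unobservable until a spending transaction reveals the preimage.
print(utxo_commitment.hex())
```

Until that spend happens, "the spec matches Core on all history" is a claim you cannot fully test, which is exactly the worry above.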

Defining the protocol by listing test cases sounds cool, except the search space is presumably exponential.