Right, I see. I guess what I'm saying in the OP applies mostly to them.
Indeed, it's important to mention OP_RETURN, but iirc Stamps followed the example of Counterparty and did not use it, specifically to avoid being pruned (though the history of this is complex and controversial).
So apparently the utxo set size has now already hit 160M. This is a huge blowup, presumably due to "stamps" and similar projects. I wonder if there is a way we can finesse this by having a flavor of bitcoin node which is "optimistic": it stores (in fast access, let's say) only utxos above a certain value (somewhere above dust), and has a kind of slower recovery mode to deal with spends of utxos below that filter. I don't know if that makes any sense, especially outside of "blocksonly", and after all we can't have all nodes be blocks-only! (Or can we? Err, I just remembered that someone around here may disagree :) )
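To make the idea slightly less hand-wavy, here's a minimal sketch of the two-tier store I'm imagining. Everything here is hypothetical (the class, the threshold constant, and the "rescan" fallback are all made up for illustration); it's not any real node's architecture:

```python
# Hypothetical sketch of an "optimistic" two-tier utxo store: a fast
# in-memory map for utxos above a threshold, and a slow fallback
# (standing in for rescanning block data on disk) for everything else.
from dataclasses import dataclass

DUST_THRESHOLD_SATS = 1000  # illustrative cutoff, not a proposal


@dataclass(frozen=True)
class Outpoint:
    txid: str
    vout: int


class TieredUtxoSet:
    def __init__(self):
        self.fast = {}      # Outpoint -> value; kept small, fast access
        self.slow_log = []  # stand-in for "recover by scanning blocks"

    def add(self, op: Outpoint, value_sats: int):
        if value_sats >= DUST_THRESHOLD_SATS:
            self.fast[op] = value_sats
        else:
            self.slow_log.append((op, value_sats))

    def spend(self, op: Outpoint) -> int:
        if op in self.fast:  # the common, optimistic path
            return self.fast.pop(op)
        # slow "recovery mode": a linear scan stands in for a disk rescan
        for i, (o, v) in enumerate(self.slow_log):
            if o == op:
                del self.slow_log[i]
                return v
        raise KeyError("unknown outpoint")
```

The optimistic bet is that almost all spends hit the fast path; the slow path trades lookup latency for memory, which is exactly the dust-utxo problem.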
Yeah sorry, the direct link to the pdf doesn't work for some reason. Here:
Funny how this works. Couple years back, I cheekily gave my audience a 'homework question': https://reyify.com/blog/homework-answer-advancing-2022
Just this week, I realised that it's a solution to a problem that's been bothering me for a while: cool, I can use Curve Trees to get an efficient ZK proof of knowledge of 1 out of N secp256k1 pubkeys (e.g. taproot utxos), but isn't that a bit useless for limiting usage, if a bad actor can just keep generating proofs indefinitely with the same pubkey, and we can't tell, because we wanted the ZK property for user anonymity?
Because curve trees use a 'rerandomised' pubkey, we can use the above 'homework' proof (a proof that you know the opening of a Pedersen commitment), plus a DLEQ, to create a verifiable key image that can only be used once, for a specific curve tree (which is an accumulator that everyone can reconstruct from the blockchain - e.g. 100k taproot pubkeys).
This can give 'decentralized' anonymous usage tokens (i.e. without an issuer), with very large anon sets, without prover or verifier computation blowing up.
More detail: https://github.com/AdamISZ/aut-ct.pdf
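For the curious, here's a minimal sketch of the key-image mechanism in Python, using a tiny prime-order subgroup of Z_p^* as a stand-in for secp256k1 and a Chaum-Pedersen-style sigma protocol. The parameters, generators, and hash here are illustrative assumptions, not the aut-ct code:

```python
# Toy sketch: a verifiable key image for a Pedersen-committed key.
# Group: order-q subgroup of Z_p^*, p = 2q+1 (tiny demo params, NOT secure).
import hashlib
import secrets

q, p = 1019, 2039      # toy safe-prime group; real code uses secp256k1
g, h, j = 4, 9, 25     # three generators of the subgroup (squares mod p)

def H(*vals):          # Fiat-Shamir challenge (illustrative)
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = secrets.randbelow(q)              # secret key
r = secrets.randbelow(q)              # rerandomiser
C = pow(g, x, p) * pow(h, r, p) % p   # rerandomised pubkey / commitment
I = pow(j, x, p)                      # key image: same x -> same I, always

# Prove: "I know (x, r) opening C, and I = j^x for the same x."
s, t = secrets.randbelow(q), secrets.randbelow(q)
A = pow(g, s, p) * pow(h, t, p) % p
B = pow(j, s, p)
c = H(g, h, j, C, I, A, B)
z1, z2 = (s + c * x) % q, (t + c * r) % q

# Verify both relations against the same response z1:
assert pow(g, z1, p) * pow(h, z2, p) % p == A * pow(C, c, p) % p
assert pow(j, z1, p) == B * pow(I, c, p) % p

# An anti-reuse verifier just keeps a set of seen images:
seen = set(); assert I not in seen; seen.add(I)
```

The thing to notice is that I depends only on x, not on the rerandomiser r: the same key always yields the same image, so reuse is detectable, while C itself reveals nothing about which pubkey was used.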
I deleted twitter 6 years ago and I'm still skeptical of this architecture working at scale.
I tend to think monetization of relays will be helpful, and while that is working well for me as a user atm, it remains to be seen if enough people will be willing to pay, or whether the tendency towards centralized entities and hidden costs will just keep reasserting itself.
Oh interesting thought, I see... I haven't spent any time looking into utreexo but I wouldn't be surprised if you're on to something there.
Good question, not really; last year I got a bit stuck when I realised there was no implementation of secq, but now there is. I think curve trees *is* going to be concretely more efficient for this specific use case, but I was really impressed to see you can use a generic ZKP system to do the same thing, without any trusted setup. It could have broader implications, maybe.
(See my earlier 11 Jan post on spartan-ecdsa if you are both interested and confused by that.) I might write up a gist with some advice on how to work with that, in case anyone else wants to run with this stuff.
A difficulty is that you really need to build big (full, not sparse) merkle trees efficiently. If/when you do, the proving time is not too bad - 8 seconds in this test of 48K keys. The number of constraints in the circuit varies ~ logarithmically with the size of the keyset (because the merkle proof does), and so the proving time should do roughly the same (the prover is O(n) in the size of the statement). In my testing I don't actually see a difference, i.e. I still get 8 seconds up to 100k keys and above; I think this is a detail of how Spartan/Spark works, but at least it's safe to say it won't be *worse* than logarithmic.
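For reference, the level-by-level construction I mean looks like this (a sketch only: sha256 stands in for Poseidon, and the power-of-two padding rule is my assumption, not what the actual tooling does):

```python
# Build a full merkle tree bottom-up, one level at a time.
# sha256 is a stand-in for Poseidon; the padding rule is illustrative.
import hashlib

def h(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def build_tree(leaves: list[bytes]) -> list[list[bytes]]:
    # Pad to a power of two with a fixed dummy leaf (assumed rule).
    n = 1
    while n < len(leaves):
        n *= 2
    level = leaves + [b"\x00" * 32] * (n - len(leaves))
    levels = [level]
    while len(level) > 1:
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels  # levels[-1][0] is the root

def merkle_proof(levels, index):
    # The sibling path has length log2(n): this is where the circuit's
    # ~logarithmic constraint count comes from.
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path
```

Building the whole thing at once (rather than via sparse-tree updates) is one hash per internal node, so 2n-1 hashes for n leaves; that's the "efficiently" part.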
It remains to be seen whether linkability can be added here (I think yes: include a DLEQ in the circuit, as in my earlier post; since they use secq, you can define another generator H, then take the Poseidon hash of the new P2 as a token a la PoDLE). Probably not a huge number of extra constraints, but the cost of the hashing is not clear...
Update: I can prove ownership of one of the taproot txos from block 85000 to block 155000 on signet - of which there are approx 48000 - with a 16kB proof, and it can be verified in about 1.5s on my laptop (have reason to believe this could go below 0.5s). That's cool, but what's really cool is if I go from 48000 to 1M+, the verification time barely changes at all - perhaps 0.1s longer, something like that.
Also, because we are including the ECDSA signature in the circuit, we can easily add other (simple) conditions by just writing more constraints. (Why ECDSA if taproot? Because bip340 is pubkey-prefixed, so you would need SHA2 in the circuit, which is probably impractical. If you just want to prove ownership, ECDSA works fine and does not have this requirement - as long as you don't accept arbitrary attacker-chosen message hashes; that would not be sound.)
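To make the key-prefixing point concrete, this is the bip340 challenge computation (straight from the BIP; only the function names here are mine):

```python
# bip340 challenge: e = int(tagged_hash("BIP0340/challenge",
#                                       R.x || P.x || msg)) mod n.
# P.x (the pubkey) is inside the SHA2 hash input.
import hashlib

def tagged_hash(tag: str, data: bytes) -> bytes:
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + data).digest()

def bip340_challenge(r_x: bytes, p_x: bytes, msg: bytes) -> bytes:
    return tagged_hash("BIP0340/challenge", r_x + p_x + msg)
```

Since p_x is exactly the witness we're trying to hide, e can't be a public input, and computing it honestly means doing SHA2 inside the circuit.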
Whether the cost of using the network is higher or lower using covenants will just depend on how you use them. In most sane use cases, it will be lower or equal; the fact that a rule inside a protocol can be enforced without a punishment mechanism will at least tend to make fallback cases less expensive.
But that's somewhat beside the point. The point is more that this additional ability to constrain behaviour makes possible whole classes of protocols that lift transactions off chain - protocols that were previously unrealistic. Think 'joinpools', think multiparty channels, think congestion control trees, even vaults (ones that don't suck, anyway!). That's the main reason I advocate for this - this kind of contract expressivity is imo necessary for the *payments* use case of bitcoin.
Not sure I understood the rest of your questions. L2 participation is obviously not the same as L1 participation. The latter is impossible for everyone, obviously. Or indeed for *anyone* most of the time. That's why current Lightning, as good as it is, is just scratching the surface of what we need.
TLDR: proving ownership of one out of 1 million utxos means an anonymity set of 1 million keys, with a proof size that is reasonably compact and a verification time that is reasonably quick (hence we talk about "sublinear", i.e. less than O(n)).
Not TLDR: you can see work I did previously to find schemes that at least come close to this here: https://reyify.com/blog/little-riddle , and the referenced earlier blog posts. These probably aren't fast enough in verification. Then there's also the idea of using Curve Trees, which I discussed in some detail in a post here on nostr recently. This is another approach (at least, I *think* it is), but now that I think about it, we *might* still get stuck with linear verification time here too.
As I keep studying this, I'm finding a lot of interesting things. In outline: you prove that you own a public key by providing a signature on an agreed message, and that the key is one of a set of ~100 million (merkle tree depth 26, say, though it matters little if you vary between 20 and 30). You hash the pubkey using Poseidon, which is particularly friendly to zkps because it only requires a few hundred constraints in the circuit.

Then for the signature, you use ECDSA. A fascinating detail here (I'm not sure if anyone considered this) is that because BIP340 Schnorr uses key-prefixing, you can't make the message hash ("e") a *public* input to the circuit (that SHA2 hash would have to be calculated in zero knowledge, which is a lot more expensive than Poseidon), whereas with ECDSA, by a trivial algebraic manipulation, you can convert the verification to a form like a*B + C = D, where the nonce-point R and the msghash are public inputs. It's still a significant expense, because you're doing (only 1) curve addition and multiplication, but with the use of secq as the base field, and with some fairly basic optimizations, it ends up being reasonable.

For this case, I am getting about a 16kB proof and about 1.5 seconds of verification time *using javascript/wasm, not rust or C*, and on a laptop. Considering that is for a proof with, easily, the anon set of the *entire utxo set of bitcoin*, then at least in theory, this is a very impressive achievement.
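To spell out that algebraic manipulation (this is my reconstruction of the rearrangement used in spartan-ecdsa-style circuits, written as toy, non-constant-time Python, not anyone's production code): since s*R = m*G + r*Q, we have Q = s*T + U, where T = r^-1 * R and U = (-r^-1 * m) * G are computable from public values, leaving the circuit just one scalar multiplication and one point addition:

```python
# Toy sketch of the ECDSA rearrangement: precompute T, U from public
# (R, r, m) so that checking a signature reduces to s*T + U == Q.
import hashlib, secrets

p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(P, Q):                              # affine point addition
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0: return None
    lam = ((3 * x1 * x1) * pow(2 * y1, -1, p) if P == Q
           else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):                              # double-and-add scalar mult
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P); k >>= 1
    return R

x = secrets.randbelow(n - 1) + 1            # secret key
Q = mul(x, G)                               # pubkey (the hidden witness)
m = int.from_bytes(hashlib.sha256(b"agreed message").digest(), "big") % n

k = secrets.randbelow(n - 1) + 1            # sign: nonce
R = mul(k, G)
r = R[0] % n
s = pow(k, -1, n) * (m + r * x) % n

# s*R = m*G + r*Q  =>  Q = s*T + U, with T and U public:
rinv = pow(r, -1, n)
T = mul(rinv, R)
U = mul((-rinv * m) % n, G)
assert add(mul(s, T), U) == Q   # the only curve work left for the circuit
```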
For anyone interested in diving deep into zero knowledge proof systems, this recent book by Thaler appears to be *very* thorough and comprehensive:
https://people.cs.georgetown.edu/jthaler/ProofsArgsAndZK.pdf
Wish there had been something as good as this over the last decade!
Recently making travel arrangements, I still found cheapair.com to be good for btc payments for flights and hotels. I also found travala.com this time, and it also worked fine for one of my bookings - a tentative recommendation, as I've only tried it once. Does anyone else have good recommendations for travel sites that accept BTC?
OK that, yes. You're really saying that a larger anonset in one coinjoin is better despite being less objectively calculable. I agree (though, the counterargument isn't 100% wrong).
I think the point Max makes here is pretty solid (albeit he's selling his product!), but I find your point much less clear. What's the relationship between the anonymity set being clearly defined and inefficiency?
It's one thing to consider the anon set not easily calculable and still think the system is viable as an anonymizing one; but if anything, it's dangerous to claim the system is better when you can't quantify it.
Probably you mean something quite different.
15 years since first non-coinbase #Bitcoin tx (Satoshi to Hal). https://mempool.space/tx/f4184fc596403b9d638783cf57adfe4c75c605f6356fbc91338530e9831e9e16
Can't believe Satoshi filled the blockchain with uncompressed pubkey spam!
Good read:
https://balajis.com/p/crypto-twitter-found-sbfs-fraud
Since then, they decided to drop the charges against him for campaign finance fraud and bribing CCP apparatchiks because, *checks notes*, no sorry, there aren't any notes; they couldn't even be bothered to make up excuses. How surprising.
Doh, ignore the reference to groth16 - I forgot that it requires pairings.
And your question about motivation: I think anti-sybil is going to prove to be a crucial property for these anonymous systems using bitcoin (most importantly Lightning - see jamming attacks). I discussed this a fair bit in this talk: https://pretalx.com/adopting-bitcoin-2022/talk/RVFS9C/