My issue is on the verification side. These are ZK proofs; create them any way you like, but they have to be verified by random people. Otherwise, what are you doing?
One reason the Kaspa team is going the ZK route is that they cannot do on-chain computation (the chain is not Turing complete), so they are outsourcing it to the end user's CPU or GPU, or to dedicated proving servers. Which is fine, ZK is useful. But then random people have to verify the computation that those CPUs and GPUs did.
So verification matters, and the whole liquidity-silos thing doesn't touch on verification at all; that's a whole other issue.
Let's say the proof is that a passport showing age over 18 was seen at such and such a time by such and such a website. Creating the proof and validating it on chain is one thing. Trustless verification by random people who need to check this fact is another thing. On Kaspa, the best you can do is an old-fashioned RPC call. Not trustless. To truly verify the state themselves, they would need to run a full Kaspa node, which is, like, big. A full Kaspa node needs to maintain the entire state of every vprog it cares about to run and re-verify the logic locally, or at least be able to access the witness data and state commitments.
Whereas with proofs on a ZK-native layer 1, anyone can do trustless self-verification on a cheap Android phone from 10 years ago. This is why the ZK-native layer 1s will win out in the end.
Why do you need to access the witness data if it’s cryptographically verified? That can’t change.
That comes after an "or"; it's either/or. But a single Kaspa node would in any case have to maintain the state commitments of every single vprog across the whole network going back to the pruning point, whatever that is for Kaspa; otherwise it makes no sense. And if the pruning point is only a few days back, that's out the window and you need an archive node, which is crazy heavy for an auditor who just needs to double-check, in a trustless way (i.e. no RPC call), that someone who visited a website last week was over 18.
So it fails on that end.
You don't need archival nodes to verify what's cryptographically verifiable. Nobody looks at the history and visually verifies UTXOs at any kind of scale.
OK, so let's say you're a compliance auditor. The team has sent you a merkle path and leaf set relating to a website login for a user who supposedly passed an age check, and this includes some user details. You need to run that against Kaspa to see if it checks out, and an RPC call won't do. This suspicious login is from one week ago.
Explain to me how, without an archive node, you do this.
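To make that concrete, here's the local half of the check as a minimal Python sketch, assuming a plain binary SHA-256 merkle tree (the helper names are mine; Kaspa's actual commitment scheme may differ). Recomputing the root from the path is the trivial part. The part the code can't solve is where `onchain_root` comes from: for a week-old commitment, getting it trustlessly is exactly what forces the archive node.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, path: list[tuple[bytes, str]], onchain_root: bytes) -> bool:
    """Recompute the merkle root from a leaf and its sibling path.

    `path` is a list of (sibling_hash, side) pairs, where side says which
    side the sibling sits on. This is the easy, purely local part of the
    audit; obtaining `onchain_root` trustlessly for a week-old block is
    the part that needs an archive node.
    """
    node = sha256(leaf)
    for sibling, side in path:
        if side == "left":
            node = sha256(sibling + node)
        else:
            node = sha256(node + sibling)
    return node == onchain_root

# Toy example: a two-leaf tree.
leaf_a = b"user:alice|age_check:passed|ts:1718000000"
leaf_b = b"user:bob|age_check:passed|ts:1718000123"
root = sha256(sha256(leaf_a) + sha256(leaf_b))

# Alice's proof is just Bob's leaf hash, sitting on the right.
assert verify_inclusion(leaf_a, [(sha256(leaf_b), "right")], root)
```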
Why is the onus on Kaspa to regulate an age check? This issue is way before Kaspa. Kaspa is only settling what's been determined (or neglected) prior. That's like saying the internet is responsible for a 15-year-old getting on a porn site.
I don't think you're understanding what ZK proofs are. Kaspa is not regulating anything. It's just doing math.
The ZK proof in such an example would be a mathematical proof that the government-private-key-signed data from the chip inside the passport was scanned by the device (which has its own cryptographic record of the NFC scan, loading into memory, and so on). All it is is math. The significance of the math is what humans agree on, but once that's agreed on, you're going to need to run occasional checks to make sure the math checks out (and thus that its significance checks out).
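To make "it's just math" concrete, here's a Python sketch of the raw statement such a proof wraps, with Ed25519 standing in for the real ICAO 9303 RSA/ECDSA passport scheme (all names and data here are made up for illustration). A ZK circuit would prove this check passed for some hidden chip data whose date-of-birth field implies over 18, without revealing the data itself.

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Stand-in for the issuing government's key pair. Real passports carry
# RSA/ECDSA certificates under ICAO 9303; Ed25519 here just shows the
# shape of the statement the ZK proof wraps.
gov_key = Ed25519PrivateKey.generate()
gov_pub = gov_key.public_key()

chip_data = b"doc:P<UTOEXAMPLE|dob:1990-01-01|..."
signature = gov_key.sign(chip_data)

def chip_data_is_authentic(pub: Ed25519PublicKey, data: bytes, sig: bytes) -> bool:
    """The raw 'math' a ZK circuit would prove it ran correctly,
    without revealing `data` itself."""
    try:
        pub.verify(sig, data)
        return True
    except InvalidSignature:
        return False

assert chip_data_is_authentic(gov_pub, chip_data, signature)
```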
You can extrapolate that to anything a Kaspa vprog would do. It's all the same thing: proofs of whatever math underlies whatever thing of interest (often a transaction, but not always).
But at the end of the day, Kaspa is just not designed to do this kind of verification efficiently and trustlessly. This is all an afterthought for Kaspa, so it stands to reason it's not going to be super efficient at it.
Why would the math break? Why the occasional checks?
Because with ZK, the data of "what happened" is typically off chain. It's not like an old-fashioned smart contract at all, where all of that data is on chain. So the parties share that data with each other privately, and then the receiving party can use what is on chain (the merkle root) to determine the "mathematical truthfulness" of what they've been passed. Those are the checks.
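A tiny self-contained illustration of those checks, again assuming a toy two-leaf SHA-256 tree with made-up data: the claim bytes travel privately, the chain holds only the root, and any doctored byte in what you were handed makes the recomputed root diverge.

```python
import hashlib

h = lambda b: hashlib.sha256(b).digest()

# Off chain, shared privately between the parties:
claim = b"user:alice|age_check:passed|ts:1718000000"
sibling = h(b"some other leaf")  # rest of the merkle path, also off chain

# On chain, committed at the time of the event: just the 32-byte root.
root = h(h(claim) + sibling)

# The receiving party's check: recompute and compare.
assert h(h(claim) + sibling) == root

# A doctored claim fails the same check; that's the whole point.
forged = b"user:alice|age_check:passed|ts:1718999999"
assert h(h(forged) + sibling) != root
```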