You don’t understand, clearly. The security budget for Bitcoin is shrinking. If Bitcoin doesn’t scale on layer 1 (a significant increase in transactions) or the spot price doesn’t double every 4 years, miners don’t get paid. This isn’t my opinion. Go do the math.

Discussion

>The security budget for Bitcoin is shrinking. If Bitcoin doesn’t scale on layer 1 (a significant increase in transactions) or the spot price doesn’t double every 4 years, miners don’t get paid.

This has been eloquently debunked numerous times and is squarely in the realm of fringe/scarecrow bitcoin arguments.

What you're effectively saying is "Hey there, looking to make everyday purchases with crypto? Use kaspa, it’s more future-secure than anything running off of the bitcoin chain, such as lightning".

Which, I mean come on, just listen to that.

You typed all that to claim it’s been debunked, but didn’t do the math.

No, that’s not at all what I’m saying. Bitcoin is a superior store of value and always will be. But when the mantra ā€œNever Sell Your Bitcoinā€ is chanted, transaction fees won’t sustain the network as block rewards fall. It’s simple math.

You cannot have a superior store of value without a secure chain. Which is it? A superior-store-of-value chain, or a chain in imminent crisis? Pick one.

And no, it's the farthest thing from "simple math". It’s all dynamic variables. You've got shifting motivations, shifting rewards, shifting costs, shifting mechanisms, shifting code, shifting hardware. This kind of "simple math" argument is ultimately nonsensical because it assumes every variable apart from the 2 or 3 in the "simple math" equation is static. Which is never true. (And usually the ones in the equation itself are not even static either.)

It’s not an absolute. It’s a growing process. If there is not an increase in transaction fees as block rewards shrink, the chain becomes less secure. Period. The base layer has to be used. It has to be used a certain amount. Your painting juxtapositions doesn’t change that. It’s simple math.
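
Since both sides keep saying "do the math," here is a minimal sketch of the arithmetic being argued over. The halving schedule is protocol fact; the price and fee numbers are illustrative assumptions, not predictions.

```python
# Minimal sketch of the security-budget argument. The subsidy halving
# is protocol fact; price and fee figures are illustrative only.

def miner_revenue_usd(subsidy_btc: float, fees_btc: float, price_usd: float) -> float:
    """USD miners earn per block: subsidy plus fees, valued at spot."""
    return (subsidy_btc + fees_btc) * price_usd

subsidy = 3.125      # BTC per block after the 2024 halving
fees = 0.05          # assumed average fees per block, BTC
price = 100_000.0    # assumed spot price, USD

for epoch in range(5):           # five more halvings, roughly 20 years
    flat = miner_revenue_usd(subsidy, fees, price)
    doubling = miner_revenue_usd(subsidy, fees, price * 2 ** epoch)
    print(f"epoch +{epoch}: flat price ā‰ˆ ${flat:,.0f}/block, "
          f"price doubling each epoch ā‰ˆ ${doubling:,.0f}/block")
    subsidy /= 2                 # block subsidy halves every ~4 years
```

With a flat price and flat fees, USD revenue roughly halves each epoch; with a price that doubles each epoch, the subsidy term holds steady. That is the whole "simple math" claim, and also exactly the set of variables the other side says won't stay fixed.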

The motto ā€œnever sell bitcoinā€ doesn’t mean ā€œdon’t use bitcoin in your daily lifeā€ but rather ā€œuse only bitcoin for your daily purchasesā€ šŸ˜‰ and the lightning network provides this nowadays.

To be fair, most lightning network "everyday purchases" are performative, and the overall volume of those is teeny tiny. Also, the lightning network will soon become a transport layer predominantly for USDT, with transactions in USDT on the lightning network (native Taproot, not ERC20) far surpassing transactions in Bitcoin.

Further decreasing the use of the base layer. My point exactly.

It's actually the opposite. And it's one of the many variables that your one-sentence analysis doesn't factor in.

USDT volume could 1000x the use of the lightning network, and that means 1000x more on-chain transactions for opening channels, closing channels, balancing, etc. (Actually more, as the balancing needs are higher.)

USDT volume on tron is $14.9 billion per day. Per DAY! For bitcoin volume on lightning it's in the tens of millions. So a fraction of a single percent.

Right now there are many people in Asia just waiting for the USDT integration with Taproot to go live. If only a few percent of the volume moves from tron to the lightning network that's a massive increase in on-chain transactions related to the lightning network.
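
Putting the figures above side by side (the $14.9B tron number cited here, and "tens of millions" taken as roughly $30M per day, which is a guess):

```python
# Back-of-the-envelope comparison using the figures cited above.
# The lightning number is an assumed midpoint for "tens of millions."

tron_usdt_daily = 14.9e9     # USD/day, USDT volume on tron
lightning_daily = 30e6       # USD/day, assumed lightning BTC volume

share = lightning_daily / tron_usdt_daily
print(f"lightning is {share:.3%} of tron's USDT volume")   # ~0.2%

migrated = 0.03 * tron_usdt_daily       # "a few percent" moves over
print(f"3% of tron volume = {migrated / lightning_daily:.0f}x "
      f"current lightning volume")      # ~15x, before channel churn
```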

It's a very ironic state of affairs, but it is what it is.

You have any idea how long it would take to open that many channels for that kind of volume? Decades to onboard that many people to Lightning. And then there’s not knowing your transaction fees on the Lightning Network in advance. You need permission to claim your Lightning sats as Bitcoin unless you open your own channel. This is the opposite of why Bitcoin was created. Go ask your friends: who puts stupid amounts of liquidity in a Lightning channel? They don’t, for obvious reasons. Not to mention channels can be closed maliciously. It’s very flawed.

The bulk of this is simply not correct, and the rest is misleading. Taproot USDT volume on the lightning network will be primarily repeat cross-border transactions between parties that know each other, which is how it is on tron. These parties will have directly connected nodes, and therefore zero routing fees. However, these transacting pairs will be regularly popping in and out of existence, with on-chain transactions each time.

Then you can ask why lightning is attractive to USDT people already using tron, and for the answer to that you again have to understand the world of tether, what the motivations are in that world, and the drawbacks of tron. But suffice it to say it's quite attractive.

The part about this not being in the original sermon for bitcoin, sure that's true, but again, it is what it is.

The bulk of what you say is simply not correct. See how good of an argument that is? Haha

Yeah. Repeat cross-border transactions is banking. You’re officially advocating for the problem. Well done.

I’m well aware of the attractions, and they point to USDT on a fair-launched POW layer 1 running at 10bps, not Lightning.

I don't see the concept of "banking" as a problem at all.

USDT on kaspa? That might be something of interest several years down the line if kaspa takes off and tether decides it meets their high bar. But to get to that stage kaspa has to solve another problem first. I'm not sure what problem that is, or for whom. You really have to use your imagination to come up with a real-world problem that kaspa solves better than anything else out there, and (critically) solves quickly after launch.

Reminds me of handshake when it launched, "look at this, a fair-launched POW chain for DNS!" and look where that is now.

Haha yeah, who needs self-sovereignty. Just use a bank! That’s why we’re all in Bitcoin! We want better banking! *sarcasm*

3rd party developers are deploying smart contracts on Kaspa tomorrow. Independent bridging of USDC is already confirmed, with Circle confirming adoption if all goes well. It will be within the next year. You need to do some homework.

You have no idea what you’re talking about with Kaspa, clearly. What problem are you claiming that you don’t know what it is?

Sounds like "who needs USDT, just use USD". I remember that. There's a pretty big difference to both dollars and banking when you switch worlds.

Independent bridging with USDC if all goes well, okay, but we can say that about a great many other chains.

What real-world problem is it solving that isn't already better solved some existing way? Paying for a coffee? No. Online micro-transactions? No. So what?

Alas...

Haha unbelievable. No, there isn’t. Banking is banking is banking. Self-sovereignty is self-sovereignty is self-sovereignty.

ā€œOther chainsā€??? Give me a break man. Are we comparing POW ā€œchainsā€ (Kaspa is a DAG-again, go do your homework) to just any other chain? Haha. Idk why I’m still engaging if you think a POW that can scale, that’s fair launched, with a fixed supply, housing stable coins is the same as ā€œother chainsā€ (ETH, TRON, SOL, etc). I’m saddened honestly. I expected more from first principles rather than rhetoric.

The real-world problem is the sovereign debt crisis. The GENIUS Act. An EXPLODING stablecoin market globally. You want that on a decentralized ledger or a centralized one? You want high fees or low fees? Give me a break man.

Two (or three, or four) companies doing their own intra-group banking on the lightning network is self-sovereign as far as that group of companies goes. If you're one of those 'crypto is only for the plebs' people then, alright, all power to you. ¡Viva la revolución!

I went and looked up this USDC-on-Kaspa news, and what I learned was that Kaspa cannot do smart contracts at all; it's all Layer 2. Kaspa cannot ever support USDC natively, or any stablecoin for that matter.

And here you are taking shots at the lightning network for not being on-chain.

This led me down a rabbit hole which took me to Quai, which looks a lot more promising than Kaspa, at least at first glance.

This is not true. Kasplex (a third party entity and dev group) developed and built out the current L2 on Kaspa.

There are several other dev groups that are working on L2s and SC. It is an open source protocol.

For you to look at 1 project, the first of its kind, and then determine that ā€œKaspaā€ can’t do smart contracts is nonsensical. Development takes time. If you want to study self-executing, native SC, look at Sparkle.

Come on, it's not Turing complete, it lacks smart contract functionality on its base layer, it's a UTXO model with limited scripting, and that's baked into the core of the core. There is no math that allows a chain like that to get "upgraded" to Turing complete.

Not that layer 2s are a bad thing, but when we're talking smart contracts on Kaspa we're talking the equivalent of Rootstock and whatnot.

Everything lacks something until it doesn’t. And it doesn’t need to be Turing complete in order to operate trustless smart contracts on the base layer.

Smart contracts as we know them need a Turing complete chain. Yes with a UTXO model like Kaspa you could do P2PK, or multi-sig this-n-that, which, technically yes, those are smart contracts, but that's just wordplay.

For real smart contracts on Kaspa you need the Layer 2, there's no getting around it.

Quai is Turing complete and fully supports smart contracts on the base layer. If the goal is a very fast proof-of-work chain with smart contracts directly on the base layer then Quai offers this now, today. No matter how good Kaspa's layer 2 is, it's all still much messier.

This is patently false. A smart contract is any code enforced on-chain that manages state transitions automatically based on rules.

Turing completeness is not required for that. Your Quai comparison isn’t apples-to-apples. Quai’s determinism is risky at best, and it is not wholly POW.

We all know what is meant when we hear someone use the term "smart contract". There is an implied level of complexity. Saying that Kaspa can do smart contracts on the base layer is like winning a court case on some arcane technicality.

Give me an example of the most complex smart contract that Kaspa could ever process on the base layer and I guarantee you it'll be something so simple that most people would be surprised to learn it's technically a smart contract, like some M of N multi-sig or what have you.
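
For concreteness, here's roughly what that kind of base-layer "smart contract" amounts to: a bare 2-of-3 CHECKMULTISIG locking script, assembled by hand. The opcode bytes are standard Bitcoin Script and the keys are dummies; whether Kaspa's script encoding matches byte for byte is an assumption here, not a claim.

```python
# A bare 2-of-3 multisig locking script, assembled by hand. Opcode
# byte values are standard Bitcoin Script; the public keys are dummies.

OP_2, OP_3, OP_CHECKMULTISIG = 0x52, 0x53, 0xAE

def multisig_2_of_3(pubkeys: list[bytes]) -> bytes:
    """Build: OP_2 <pk1> <pk2> <pk3> OP_3 OP_CHECKMULTISIG."""
    assert len(pubkeys) == 3 and all(len(pk) == 33 for pk in pubkeys)
    script = bytes([OP_2])
    for pk in pubkeys:
        script += bytes([len(pk)]) + pk   # direct push of each key
    return script + bytes([OP_3, OP_CHECKMULTISIG])

dummy_keys = [bytes([i]) * 33 for i in (2, 3, 5)]  # placeholder keys
print(multisig_2_of_3(dummy_keys).hex())
# The entire "contract": any 2 of these 3 keys can spend. That's it.
```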

You can nitpick on Quai, but there's no arguing that a smart contract on its base layer is a *real* smart contract.

Just like everybody guaranteed 7 years ago POW couldn’t scale on the base layer, huh?

POW scaling on the base layer is not breaking math, it's fine tuning. Performing Turing complete operations on a non Turing complete chain is plain old breaking math.

I mean the whole premise of something like Kaspa is "our layer one is so good we don't need a layer two". But then it turns out you do in fact need a layer two for these key use cases, which kind of undoes the whole marketing.

Anyway Quai is right there, it has smart contracts directly on the L1, it's very fast, it's proof of work, why would anyone choose Kaspa L2 over Quai L1?

You think the whole marketing of a POW with a revolutionary directed acyclic graph structure and a block speed of 10bps is base layer smart contracts? Yikes.

To answer your question, Quai isn’t POW consensus. You need to do more homework.

Also, there’s no hard cap on Quai tokens. I don’t like long-standing inflation, no thanks.

Read about Sparkle for Kaspa. You don’t know that a L2 is necessary. They are still developing it.

Are you trying to say there's a way to do Turing-complete operations in a non-Turing-complete environment? Cause that's what it sounds like.

There is not. A quick look at this Sparkle shows it's all rollups and custom zk-opcodes. So Layer 2 stuff. Sparkle has nothing to do with making the base layer magically Turing complete.

Kinda like the RGB folks saying "Now we have smart contracts on Bitcoin". No, no you do not. You have a ZK layer 2, just like all the other ZK layer 2s on all the other chains.

Right, but the whole point of Kaspa is that it's a really fast PoW layer 1. That's the entire raison d'être.

Sure, it (and many other layer 1s) can function as a ZK settlement layer, but the point of Kaspa never was for it to be a ZK settlement layer. That's a massive pivot. I mean, if all the activity moves to zk sequencers off chain, then what was the point of making it easy for activity to happen on chain?

Either Kaspa sticks to its original mission of being an "all you need" layer 1 and forgets smart contracts altogether, or it becomes yet another fish in the zk settlement layer pond.

Go watch what Sutton said Vprogs are. No liquidity silos. He did an interview with XXIM. Sompolinsky is saying the same thing.

I get the gist, our team does zkapps (o1js, noir, cairo) and there's not a lot new under the zksun.

At the end of the day it's logic executed off-chain by a dedicated prover. It always is. Whether you use a sequencer and call it an L2 or use a so-called vprog and call it an extension to the L1, it's still some outside CPU taking a long time to prove something and then yeeting that proof on back.

Honestly, for ZK, my view is that you need a ZK stack top to bottom. Mina was too early, but that is the right path: the entire Mina chain reduces to a 22kb recursive snark, and you can verify anything proven in the entire history of the chain on an iPhone in 100 milliseconds. You just need that 22kb snark and whatever zkapp state proof you got sent to you, and that's it. So a super-fast ZK sequencer rolling up to a ZK-native layer like Mina, or some high-speed ZK-native L1 that emerges in a few years, this stuff all makes a lot of sense.

It would be impossible for a Kaspa node to verify vprog ZK proofs on an iPhone the way a Mina mobile Rust node can. For Kaspa the best you can do is an SPV kind of deal, trusting a cluster of full nodes or a centralized RPC endpoint, and anyway on an iPhone an SPV will get stopped once the app goes to the background.

There are projects taking the Mina learnings and coming out in the next few years that will be the future of ZK. It'll be ZK-native top to bottom. (I'm in Asia so I'm biased, but I think ZK for the next 10 years is all about mobile.)

Kaspa is just not ZK-native. You can do a similar trick to these vprogs (minus the DAG flourish) on Solana, but Solana is not ZK-native either. And Aztec rolling up to Eth, okay, Noir is nice, but Eth is not ZK-native either. All of these suffer from the same dissonance and can't be the ZK future.

Solana is for old-fashioned smart contracts. Kaspa, like Bitcoin, is for money. My thoughts anyway.

There wasn’t a whole lot new under the sun for POW, until Kaspa. And here we are at 10bps (everybody said it couldn’t be done). I’m not sure getting ā€œthe gistā€ means you can create something nobody else has been able to. Clearly Sutton and YS disagree with you.

I don't think they would disagree at all. They would agree that Kaspa is not a ZK-native layer 1 like Mina, or whatever follows Mina. Because it's not.

But they'd say there's still a use case for yeeting ZK proofs back to Kaspa, which there is.

I'm arguing that use case is quite limited in the grand scheme, and that in the end some future ZK-native layer 1 will win. Vertical integration always wins.

The crux here is that zkapps are not smart contracts in the sense that all computation is executed by every node on the chain. They are an outside thing. When you yeet a ZK proof to chain, the main thing people have to then be able to do is verify the proof. If not, then what was it for?

If verification is hard, or slow, or clumsy, if it requires trust in an RPC call, or running a node, or an SPV, or whatever else, then it'll ultimately fail as a system, because that's all too much of an ask of the verifier.

If they can remove the issue of liquidity silos, what’s your issue with what’s being done?

My issue is on the verification side. These are ZK proofs; create them any way you like, but they have to be verified by random people. Or what are you doing?

One reason the Kaspa team is going the ZK route is because they cannot do on-chain computation (non Turing complete), so they are outsourcing to the end-user's CPU or GPU, or to dedicated proving servers. Which is fine, ZK is useful. But then random people have to verify the computation that those CPUs and GPUs did.

So verification matters, and the whole liquidity silos thing doesn't touch on verification at all; that's a whole other thing.

Let's say that the proof is that a passport showing age over 18 was seen at such and such a time by such and such a website. Creating the proof and validating it on chain is one thing. Trustless verification by random people who need to check this fact is another thing. With Kaspa the best you can do is an old-fashioned RPC call. Not trustless. To truly verify the state themselves, they would need to run a full Kaspa node, which is, like, big. A full Kaspa node needs to maintain the entire state of all vprogs it cares about to run and re-verify the logic locally, or at least be able to access the witness data and state commitments.

Whereas with proofs on a ZK-native layer 1, anyone can do trustless self-verification on a cheap android phone from 10 years ago. This is why the ZK-native layer 1s will win out in the end.

Why do you need to access the witness data if it’s cryptographically verified? That can’t change.

That comes after an "or". It's either/or. But a single kaspa node would in any case have to maintain the state commitments of every single vprog across the whole thing going back to the pruning point, whatever that is for kaspa. Otherwise it makes no sense. If the pruning point is only a few days back and that's out the window, then you need an archive node, and that's crazy heavy for an auditor who just needs to double-check in a trustless way (i.e. no RPC call) that someone who visited a website last week was over 18.

So it fails on that end.

You don’t need archival nodes to verify what’s cryptographically verifiable. Nobody looks at the history and visually verifies UTXOs on a scale of any magnitude.

Ok, so let's say you're a compliance auditor. You've been sent by the team a merkle path and leaf set relating to a website login for a user who supposedly passed an age check, and this includes some user details. You need to run that against Kaspa to see if it checks out, and an RPC call won't do. This suspicious login is from one week ago.

Explain to me how, without an archive node, you do this.

Why is the onus on Kaspa to regulate an age check? This issue is way before Kaspa. Kaspa is only settling what’s been determined (or neglected) prior. That’s like saying the internet is responsible for a 15 year old getting on a porn site.

I don't think you're understanding what ZKproofs are. Kaspa is not regulating anything. It's just doing math.

The ZK proof in such an example would be a mathematical proof that the government-private-key-signed data from the chip inside the passport was scanned by the device (which has its own cryptographic record relating to the nfc scan and loading into memory and so on). All it is is math. The significance of the math is what humans agree on, but once that's agreed on you're gonna need to run occasional checks to make sure the math checks out (and thus that the significance of it checks out).

You can extrapolate that to anything a kaspa vprog would do. It's all the same thing, proofs of whatever math underscores whatever thing of interest (often a transaction but not always).

But end of the day, Kaspa is just not designed to do this kind of verification efficiently and trustlessly. This is all an afterthought for Kaspa. So it stands to reason it's not going to be super efficient at it.

Why would the math break? Why the occasional checks?

Because for ZK the data of "what happened" is typically off chain. It's not like an old-fashioned smart contract at all, where all of that data is on chain. So the parties will share that data with each other privately. And then the receiving party can use what is on chain (the merkle root) to determine the "mathematical truthfulness" of what they've been passed. Those are the checks.
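
To make "the checks" concrete, here's a toy sketch of verifying privately shared data against a merkle root that lives on chain. The hashing scheme and leaf encoding are assumptions for illustration, not Kaspa's actual commitment format; the point is that only the root comes from the chain, which is exactly where the node/RPC question bites.

```python
# Toy merkle-path check: the leaf and path arrive off chain, privately;
# only `root` comes from the chain. Hashing scheme is an assumption.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_path(leaf: bytes, path: list[tuple[bytes, str]], root: bytes) -> bool:
    """Fold sibling hashes up to the root; 'L'/'R' is the sibling's side."""
    node = h(leaf)
    for sibling, side in path:
        node = h(sibling + node) if side == "L" else h(node + sibling)
    return node == root

leaf = b"age-check passed for login at example.site, 2025-01-01"
sibling = h(b"some other leaf")
root = h(sibling + h(leaf))          # what the chain would commit to
print(verify_path(leaf, [(sibling, "L")], root))  # True
```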

ā€œIt is what it isā€ is the problem. Do better. Stop worshipping a religion and arrive at solutions from first principles.

You’re missing the point. You’re not following.

In one scenario, if one miner captures 51% of the hashrate, the value of the entire network drops to zero, and all of the miner's devices become junk. Therefore, no miner wants to capture 51% of the hashrate.

Increasing the size limit or decreasing block times for scaling increases the blockchain size and eliminates small miners.

It’s all dynamic variables. You've got shifting motivations, shifting rewards, shifting costs, shifting mechanisms, shifting code, shifting hardware. A lot of these "if X doesn't happen by Y then we get Z" arguments are nonsensical, because they assume every other variable is static.