I responded on Twitter too, but this is such a strange rant to me.

Bitcoin Core contributors' current push for better mempool policy is *directly* related to multi-party UTXO ownership and scaling. It substantially improves Lightning in practice today, and will certainly be required for any future scaling technologies.

In the meantime, lots of folks (like James) continue to do research into how to scale Bitcoin (and cryptocurrencies generally). There's no reason to care whether that happens via Bitcoin Core contributors or others, and we're still really far from having any particularly good ideas on this front. As that research continues, there's no reason for concrete software aimed at short-term soft-forks. nostr:note16h9axg7x728pt7776x4hxq5vh448xz7gszjnjxsjg7rcwue3v4ysevtlgm

Discussion

Do you disagree with his numbers and prospects for UTXO ownership?

Not sure which numbers you’re referring to, but I think I agree with the *need*, just strongly disagree with the idea that we have any great solutions. From where I sit, the ideas for scaling using covenants often don’t provide all that much scale, and almost always make other tradeoffs.

IMHO one of the more compelling ideas is timeout trees (which sadly was proposed after people had kinda given up on CTV). But it makes a huge tradeoff: if you get screwed, it costs you 10-20x more to force-close than Lightning does. Worse, an operator can create a "ghetto" of poor users and screw them all to steal from them. *But* it gets you great optimistic-case scaling!

Most things end up looking like that, and it leaves me pretty unexcited… except that the research is moving at a clip! I'm excited to see what the research comes up with; in the meantime I'm still building lightning.

I share your skepticism about how much covenants can scale on-chain throughput. James makes a good point about CTV alleviating the thundering-herd problem, but if fees are permanently higher than what people can afford, it won't help.

I agree with you that all proposed L2 solutions based on covenants are horrible (except for Mercury statechains, but they're not fully trustless, so I guess that makes everybody ignore them?).

I don't understand the last paragraph. What is this research you're talking about? I want to be excited about something too.

There's a bunch of research on the limits of things: Shielded CSV/rollups (though I want to see the limits of a Shielded CSV thing if we build a version you can do lightning on top of), Ark/timeout trees is a cool set of ideas that feels like it could be extended, etc. The results certainly don't point to us having any silver bullets, but I think some of those things, with some more tweaks, may give us a nice direction. But I certainly want more analysis.

By the way, do we really have any prospects of ever creating a trustless decentralized L2 on Bitcoin? Isn't it true that Ethereum has had the ability to make any covenants they want for years, and after so much time trying to find a scalable solution, the best they got was zk-rollups, which centralize everything in the hands of a company and still require a ton of space that has to be acknowledged by chain consensus?

In other words, even if we had all the opcodes we wanted by magic we would still be just as bad at scaling as Ethereum is currently?

Yea, I mean the ETH zk-rollups are super centralized, but to some extent that's a consequence of these things being hard to build, so they need an "escape valve" in practice. There's not a lot of motivation to remove that, but I assume it'll get removed eventually… it just may be a while.

I do think it’s somewhat informative that ETH only built *rollups*, which certainly don’t have a huge scalability multiple themselves (doubly so for payments, though it’s important to note ETH is trying to scale computation, not payments).

My thoughts here are definitely unrefined, but I am sceptical of the "onchain rush solved by covenants" argument.

1. You're assuming fee volatility. I expect this to reduce over time, as regular Bitcoin usage adapts its behaviour and smooths out fees.

2. You're assuming wallet infrastructure which is pre-built to handle this case.

3. You're changing the deal, so the *recipient* now pays fees. If you shape things as payment trees, the tradeoff gets worse (approaching twice the weight of just paying normally, requiring that much fee volatility to make sense; see the sketch after this list), and you introduce games between the recipients over who pays fees.

4. You have created a novel financial instrument in fee futures, not something I accept as a "payment": I have not received it, and you've offloaded an unknown level of costs onto me to "collect" it.

5. In real bankruptcy, this is *not what you want*. You don't want your funds stuck on chain. Many might want theirs transferred in one tx to Coinbase. Others will want payment over lightning, or fiat.

6. You can do this badly, today. You can publish a zero-fee tx which pays everyone, or even a tree. That at least proves you have the funds, and can be seen by existing wallets. This, of course, requires the mempool changes which James complains about.
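To put rough numbers on item 3, here's a back-of-envelope sketch in Python. The vbyte constants are my own approximations for Taproot-ish inputs and outputs (not exact serialization), so treat the ratios as ballpark only:

```python
# Rough vbyte comparison: one batch tx paying n recipients vs. fully
# unrolling a radix-k payment tree to the same n recipients.
# Constants are assumed Taproot-ish sizes, for illustration only.

TX_OVERHEAD = 11  # version, locktime, in/out counts (approx. vbytes)
IN_VSIZE = 58     # one key-path P2TR input (approx. vbytes)
OUT_VSIZE = 43    # one P2TR output (approx. vbytes)

def batch_vsize(n: int) -> int:
    """One tx: a single input paying n recipient outputs."""
    return TX_OVERHEAD + IN_VSIZE + n * OUT_VSIZE

def tree_vsize(n: int, k: int) -> int:
    """Radix-k tree over n leaves: about ceil((n-1)/(k-1)) internal
    txs, each spending one parent output and creating k outputs."""
    internal = (n - 1 + (k - 2)) // (k - 1)
    return internal * (TX_OVERHEAD + IN_VSIZE + k * OUT_VSIZE)

n = 1024
for k in (2, 4, 16, n):
    ratio = tree_vsize(n, k) / batch_vsize(n)
    print(f"radix {k:4d}: ~{ratio:.1f}x the weight of one batch tx")
```

With these assumed sizes a binary tree comes out around 3.6x a single batch tx and a radix-4 tree around 1.9x; counting outputs alone (which dominate) gives the roughly-2x figure. Wider trees approach the batch weight, but the tradeoff is they unroll in coarser chunks.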

In summary, I don't see congestion trees actually being used: certainly not for this bank-run-in-high-fee scenario.

nostr:nevent1qqs83jwmj5g4fnqa94s5nnztnwje3v4kf4wgyyv7w4dss6njvznne0cpr4mhxue69uhkummnw3ezucnfw33k76twv4ezuum0vd5kzmp0qgsr6tj32zrfn7v0pu4aheaytdnnc6rluepq73ndc2tdjzus34gat9qrqsqqqqqpn6m7kw

A couple counter-points:

> assuming wallet infrastructure

> the *recipient* now pays fees

The sender could be the one responsible for getting txs confirmed (CPFPing as needed), which simplifies things a bunch.
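As a toy sketch of the fee math behind "the sender CPFPs as needed" (the function and sizes here are illustrative, not any wallet's actual API):

```python
# Toy sketch: the fee a CPFP child must add so the parent+child
# package reaches a target feerate. Illustrative only, not any
# wallet's actual API.

def cpfp_child_fee(parent_fee: int, parent_vsize: int,
                   child_vsize: int, target_rate: float) -> int:
    """Fee in sats the child needs so that
    (parent_fee + child_fee) / (parent_vsize + child_vsize)
    reaches target_rate (sat/vB)."""
    needed = target_rate * (parent_vsize + child_vsize) - parent_fee
    return max(0, round(needed))

# e.g. bumping a zero-fee ~155 vB tree node with a ~110 vB child,
# targeting 20 sat/vB:
print(cpfp_child_fee(0, 155, 110, 20.0))  # -> 5300
```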

You don't really need specialized wallet support in this case: you could just send the txs by email (as an emergency fallback if the sender doesn't confirm them; typically unused) and have users verify inclusion with some external website/tool.

Given that senders are expected to be exchanges and the like, it seems plausible that users could mostly rely on them to get txs confirmed and wouldn't mind waiting for the payment to show up (as long as they have a backup).

Admittedly this does get quite a bit more complicated if the receiver is responsible for fees.

> You don't want your funds stuck on chain. Many might want theirs transferred in one tx to Coinbase.

A cool thing about congestion control is that if the recipient is a custodian with sufficient liquidity, they could credit their customers immediately.

So basically, if you withdraw from one exchange into another and provide them with a payment inclusion proof, you won't actually have to wait for the tree to unroll before it shows up in your balance.
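To illustrate what such a "payment inclusion proof" could be, here's a toy sketch (my own illustration with a made-up serialization, not real Bitcoin encoding and not the Minsc code linked below): the proof is the chain of covenant-committed transactions from a confirmed output down to the leaf paying the user, and verification just checks each step spends the previous txid.

```python
# Toy model of a payment "inclusion proof" for a covenant tree: the
# proof is the chain of tree transactions from a confirmed output down
# to the leaf paying the recipient. Uses a made-up serialization,
# NOT real Bitcoin tx encoding.

import hashlib
from dataclasses import dataclass
from typing import List, Tuple

def txid(raw: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(raw).digest()).digest()

@dataclass
class ToyTx:
    parent_txid: bytes                # txid of the output this tx spends
    parent_vout: int
    outputs: List[Tuple[bytes, int]]  # (scriptPubKey, amount) pairs

    def serialize(self) -> bytes:
        raw = self.parent_txid + self.parent_vout.to_bytes(4, "little")
        for script, amount in self.outputs:
            raw += amount.to_bytes(8, "little") + script
        return raw

def verify_inclusion(confirmed_txid: bytes, path: List[ToyTx],
                     recipient_spk: bytes, amount: int) -> bool:
    """Check each tx in `path` spends the previous txid, and that the
    final (leaf) tx pays `recipient_spk` at least `amount`."""
    if not path:
        return False
    expect = confirmed_txid
    for tx in path:
        if tx.parent_txid != expect:
            return False
        expect = txid(tx.serialize())
    return any(spk == recipient_spk and amt >= amount
               for spk, amt in path[-1].outputs)
```

With CTV specifically, the covenant in each output is what would guarantee these child transactions are the only possible spends, making the chain binding rather than a mere promise (which is the double-spend point raised below).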

btw I was just recently playing with a simple congestion control implementation written in (an unofficially released version of) Minsc, which might be of interest to people following this thread

It's available here: https://min.sc/v0.3/#github=examples/ctv-congestion-control.minsc

> You can do this badly, today. You can publish a zero-fee tx which pays everyone, or even a tree.

This doesn't prevent the sender from signing double-spends though, making this a promise rather than a guarantee, so I wouldn't say that you can do this today.

I would go one step further and suggest that when someone is throwing mud and demanding action towards a specific choice, this is evidence that the choice is the wrong one. Emotion can be a signal of commitment, but a brief survey of history demonstrates it can easily be used to make people act against their own best interests.

Good choices tend to be based on well reasoned arguments that recognize root problems and the compromises of a decision. If something is important, it's worth putting in the effort to make good decisions.

Is "mempool policy" here referring to datacarrier, or something else?

What surprised me was how Core leaned on nonsensical "disputes" to justify their inaction (well over a year now).

Either fix datacarrier or remove it. The discussion of endless cat-and-mouse possibilities for data storage, or the distraction of how mempool policy could induce out-of-band tx behavior, all that nonsense, is irrelevant and should happen elsewhere.

I'd imagine most users are fine with Core's mempool policy as it stands, provided the available configurations actually work (they don't). I don't think mempool policy is a distraction at all, because it's hard to determine the effectiveness of available scaling solutions if basic things like this are broken.

Running through another soft fork just to add a new set of opcodes (which are already available to use on Liquid) is irresponsible.

Thanks for sharing your thoughts on nostr. I value your opinion and am glad to have it here.

> we’re still really far from having any particularly good ideas on this front.

Hard disagree on this, though. There are many proposals that pass muster, IMO. I favor LNHANCE, which combines LN-symmetry and covenant capabilities. I am confused why you would characterize this as not a particularly good idea. You know better than almost everyone what a massive upgrade LN-symmetry would be.

I don’t think LN-symmetry solves real user pain points. It makes software simpler, and maybe is important if we move towards multi-party channels, but that’s a ways off.