Decentralization is hard. Everything sucks. nostr:note17d3estex3ftem4v8kztxwhrvpzpl8jwx4jdtcn5xez66f4lrkm5qe9sjjd
(The LDK “historical model”, however, seems to do okay, keeping histograms of the liquidity bounds)
Err, no, sorry, that’s wrong. Degrading instantly is bad (the learning does help!), but the model itself is worse than the naive “I dunno, 50/50 always” “model”, even when you learn.
At least my initial analysis of the data seems to say that degrading instantly is better than any other time constant 😭. (May just be some nasty bug in the way we’re calculating probabilities?)
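For concreteness, here is roughly what “degrading with a time constant” means in this kind of scorer: the learned liquidity bounds get relaxed back toward the uninformative prior of (0, capacity) as they age. This is only an illustrative sketch of the idea, not LDK’s actual code; the struct, field names, and half-life parameter are all made up for the example.

```rust
use std::time::{Duration, Instant};

/// Learned liquidity bounds for one channel direction (illustrative only).
struct LiquidityBounds {
    min_msat: u64,      // we've seen at least this much get through
    max_msat: u64,      // we've seen that more than this fails
    capacity_msat: u64, // the uninformative prior is [0, capacity]
    learned_at: Instant,
}

impl LiquidityBounds {
    /// Relax the learned bounds toward the prior with an exponential time constant.
    /// A zero `half_life` models "degrading instantly": the learned bounds are
    /// discarded as soon as they age at all.
    fn decayed(&self, now: Instant, half_life: Duration) -> (u64, u64) {
        if half_life.is_zero() {
            return (0, self.capacity_msat);
        }
        let elapsed = now.saturating_duration_since(self.learned_at).as_secs_f64();
        let confidence = 0.5f64.powf(elapsed / half_life.as_secs_f64());
        // The lower bound decays toward 0, the upper bound toward capacity.
        let min = (self.min_msat as f64 * confidence) as u64;
        let max = self.capacity_msat
            - ((self.capacity_msat - self.max_msat) as f64 * confidence) as u64;
        (min, max)
    }
}
```

The puzzle in the replay data is that a zero half-life (throw the bounds away immediately) comes out ahead of any slower decay.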
It is important to empathize with frustrated users. It's sometimes an unattainable ideal, but who hasn't hit software that Just Doesn't Work? We don't really care if it's just something about our setup, or fundamentally broken, or a completely unhelpful error message: it's an incredibly frustrating feeling of impotence.
Sure, you shouldn't take it out on the devs you aren't paying, but we're all human.
I can't speak for all developers, but I became a FOSS coder in the Linux Kernel. That gave me a pretty thick skin: Linus could be an ass, and even when he was wrong there was no appeal. So I generally find it easier to sift through the users' frustrations and try to get to the problem they are having.
https://github.com/ElementsProject/lightning/issues/7180
And often it turns out, I agree! This shit should just Work Better!
CLN payments are the example here, and they were never my priority. That might seem weird, but the first production CLN node was the Blockstream store. So we're good at *receiving* payments! But the method of routing and actually making payments is neither spec-defined nor a way to lose money. It's also hard to measure success properly, since it depends on the vagaries of the network at the time.
But it's important, turns out :). And now we see it first-hand since we host nodes at Greenlight. So this release, unlike most, was "get a new pay system in place" (hence we will miss our release date, for the first time since we switched to date-based releases). Here's a list of what we did:
1. I was Release Captain. I was next in the rotation anyway, but since this was going to be a weird release I wanted to take responsibility.
2. I wrote a compressor for the current topology snapshot. This lets us check a "known" realistic data set into the repo for CI.
3. I wrote a fake channel daemon, which uses the decompressed topology to simulate the entire network.
4. I pulled the min-cost-flow solver out of renepay into its own general plugin, "askrene". This lets anyone access it, lets @lagrange further enhance it, and makes it easier for custom pay plugins to exist: Michael of Boltz showed how important this is with mpay.
5. A new interface for sending HTLCs, which mirrors the path of payments coming from other nodes. In particular, this handles self-pay (including payments where part is self-pay and part remote!) and blinded path entry natively, just like any other payment.
6. Enhancements and cleanups to our "libplugin" library for built-in plugins, to avoid nasty hacks pay has to do.
7. Finally, a new "xpay" command and plugin. After all the other work, this was fairly simple. In particular, I chose not to be bound to the current pay API, which is a bit painful in the short term.
8. nostr:nprofile1qqsx533y9axh8s2wz9xetcfnvsultwg339t3mkwz6nayrrdsrr9caagppemhxue69uhkummn9ekx7mp0fqcrlc changed our gossip code to be more aggressive: you can't route if you can't see the network well!
Importantly, I haven't closed this issue: we need to see how this works in the Real World! Engineers always love rewriting, but it can actually make things worse as lessons are lost, and workarounds people were using before stop being effective.
But after this fairly Herculean effort, I'm going to need to switch to other things for a while. There are always other things to work on!
Dunno if you saw https://bluematt.bitcoin.ninja/2024/11/22/ln-routing-replay/ but I recently started being more rigorous about our pathfinding scorer. Might be something to play with; it seems like the simple “just keep upper and lower bound on each channel’s liquidity” approach performs *worse* than always assigning each hop a 50% success probability. Keeping a histogram of those bounds, though, does reasonably well.
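To spell out what’s being compared, here’s a sketch of the two “models” (function names are mine, not the actual scorer API): the bounds approach treats a hop’s available liquidity as uniform between the learned lower and upper bounds and asks how likely it is to cover the amount, while the baseline just says 50% for every hop.

```rust
/// Probability that a hop can forward `amount_msat`, assuming its available
/// liquidity is uniformly distributed in [min_msat, max_msat]. Sketch only.
fn bounds_success_probability(amount_msat: u64, min_msat: u64, max_msat: u64) -> f64 {
    if amount_msat <= min_msat {
        1.0
    } else if amount_msat > max_msat {
        0.0
    } else {
        // P(liquidity >= amount) under a uniform prior on the remaining range.
        (max_msat - amount_msat) as f64 / (max_msat - min_msat) as f64
    }
}

/// The naive baseline: every hop succeeds half the time, no matter what.
fn naive_success_probability(_amount_msat: u64) -> f64 {
    0.5
}
```

With no learned data (bounds of 0 and the channel capacity), the first function says a small payment over a big channel almost certainly succeeds; that kind of over-confidence seems to be what lets the flat 50% baseline beat it in the replay.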
Sudden distribution of cash to a bunch of people could cause a nontrivial bump in spending, offsetting the crash in spending being predicted due to layoffs and other economic headwinds. It was mostly a joke, though; I don’t think there are that many people with that much bitcoin trying to sell that much bitcoin with that much demand for bitcoin…
Bitcoin preventing the post-high-interest-rates 2025 recession that everyone is predicting would be kinda wild.
The term developed, I think, because the rest of non-Bitcoin crypto also goes in cycles. Many of those folks aren’t really all that aware of the Bitcoin halving, even though the usual pattern is a Bitcoin price spike post-halving, then some of that money rotates into ETH/etc as the Bitcoin run-up loses steam, then as the ETH/etc run-up loses steam the shitcoin/memecoin/garbage pump starts.
Lots of the ETH/memecoin bros don’t really think about Bitcoin’s halving, so they just see a cycle, even though it’s driven by Bitcoin.
The options are that, or constantly annoying other people because you aren’t doing stuff you promised them so you can keep focus 😭
Yeah, I mean the ETH zkRollups are super centralized, but to some extent that’s a consequence of these things being hard to build, so they need an “escape valve” in practice. There’s not a lot of motivation to remove that, but I assume it’ll get removed eventually… it just may be a while.
I do think it’s somewhat informative that ETH only built *rollups*, which certainly don’t have a huge scalability multiple themselves (doubly so for payments, though it’s important to note ETH is trying to scale computation, not payments).
There’s a bunch of research on the limits of things: Shielded CSV/rollups (though I want to see the limits of a CSV thing if we build a version you can do lightning on top of), Ark/timeout trees is a cool set of ideas that feels like it could be extended, etc. The results certainly don’t point to us having any silver bullets, but I think some of those things, with some more tweaks, may give us a nice direction. But I certainly want more analysis.
This is phrased much better than I could manage. I’m really, really happy that lots of people are doing *research* into how and what kind of covenants get us the best outcome, but that doesn’t mean we need to rush into anything; maybe the opposite. nostr:note1yaen3npjmdnf8mfslwtjt8640kct7qkddyl0mxdk39a45kkeu5pq7l4aq6
No one will ever use congestion control as defined by CTV.
I don’t think LN-symmetry solves real user pain points. It makes software simpler, and maybe is important if we move towards multi-party channels, but that’s a ways off.
I think interesting ideas using it were eventually proposed… after most people had given up on CTV 🤷‍♂️
Not sure which numbers you’re referring to, but I think I agree with the *need*, just strongly disagree with the idea that we have any great solutions. From where I sit, the ideas for scaling using covenants often don’t provide all that much scale, and almost always make other tradeoffs.
IMHO one of the more compelling ideas is timeout trees (which sadly was proposed after people had kinda given up on CTV). But it makes a huge tradeoff: if you get screwed, it costs you 10-20x more to force-close than Lightning. Worse, an operator can create a “ghetto” of poor users and screw them all to steal from them. *But* it gets you great optimistic-case scaling!
Most things end up looking like that, and it leaves me pretty unexcited… except that the research is moving at a clip! I’m excited to see what the research comes up with; in the meantime I’m still building lightning.
I responded on Twitter too, but this is such a strange rant to me.
Bitcoin Core contributors’ current push for better mempool policy is *directly* related to multi-party UTXO ownership and scaling. It substantially improves lightning in practice today, and will certainly be required for any future scaling technologies.
In the meantime, lots of folks (like James) continue to do research into how to scale Bitcoin (and cryptocurrencies generally). There’s no reason to care whether that happens via Bitcoin Core contributors or others, and we’re still really far from having any particularly good ideas on this front. As that research continues, there’s no reason to rush concrete software towards short-term soft forks. nostr:note16h9axg7x728pt7776x4hxq5vh448xz7gszjnjxsjg7rcwue3v4ysevtlgm
To be fair, we’ve also cut the bandwidth needed to stream live video by many multiples.