Thanks for sharing. The thing is.... this is when you had your channels set to 700 ppm, right? I see you have been tweaking these fees recently... I think you had a long period of zero fees, followed by fees around 1500, followed now by 700 -- given these changes, isn't it likely the network might "adjust" over time, and your 0.6% profit estimate might therefore vary widely? Or are you confident that, if you continued with 700 ppm, you would likely stay around 0.6% long-term? (FYI, we run The Megalith Node; we had some channels in common for a while but none currently.)
Discussion
Briefly: a year ago I adjusted fees step by step from zero up to about a thousand ppm and recorded in a spreadsheet how many satoshis my node was earning per day. After reaching a thousand ppm, I reduced the fees step by step again and kept recording. I ended up with a fairly large spreadsheet. Daily revenue fluctuated, of course, but when I looked at which fee levels generated the most satoshis per day, 700 ppm seemed to yield more on average than the others. Not long ago I tried feeding this spreadsheet to ChatGPT, and based on its analysis it suggested about 1185 ppm to maximize profit. I tried that, but it turned out not to be the best option. A few days ago I set it back to 700 ppm, as I had about a year ago, and again that number proved very close to generating the most daily fee revenue. So I settled on that figure.
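The kind of comparison described above can be sketched like this (the numbers are invented for illustration, not the author's actual data): group the daily revenue records by the fee that was in effect and pick the fee with the highest average.

```python
# Hypothetical sketch: given daily (fee_ppm, sats_earned) records exported
# from a spreadsheet, find the fee level with the highest mean daily revenue.
from collections import defaultdict
from statistics import mean

# Illustrative data only.
records = [
    (0, 0), (0, 0),
    (500, 2100), (500, 1900),
    (700, 2600), (700, 2800), (700, 2500),
    (1000, 1700), (1185, 1500),
]

by_fee = defaultdict(list)
for fee_ppm, sats in records:
    by_fee[fee_ppm].append(sats)

avg_by_fee = {fee: mean(v) for fee, v in by_fee.items()}
best_fee = max(avg_by_fee, key=avg_by_fee.get)  # fee with highest mean sats/day
```

Of course, this ignores that the network itself drifts over time, which is exactly the caveat raised above: the average at each fee level is only meaningful if conditions stayed roughly comparable across the sampling periods.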
I used to use a fee-balancing method, but then decided to abandon it. I'll look for where I might have written about it. Once I find it, I'll send you the link.
Interesting that you abandoned the fee-based balancing method.
I might be wrong, but it seems to me that channel balancing through fees has some significant downsides. Let me try to explain them.
I personally witnessed a situation where I tried to make a payment from a mobile wallet and couldn't. The route went through my nodes. I checked the logs and found that when the wallet sent the payment, it used fee parameters that had been in the network graph several hours earlier; they were later raised as liquidity decreased. At that time I ran an algorithm where, if a channel's liquidity got depleted, its fee would increase. For this reason the payment couldn't go further. The sender's wallet used a lower fee because that was what it saw in the network graph, which didn't carry fresh data well due to the protocol's significant propagation delay. In theory, when a payment fails because of fees, the intermediate node where it fails forms a response and includes the new fees in that response. Upon receiving such a packet, the sender's wallet should adjust the fees and resend, and the payment would then go through. But this doesn't always happen; it may be a bug in a particular implementation, or other reasons I can't fully pin down. Either way, I concluded at the time that if I had fixed fees, that payment would have succeeded, since the liquidity still allowed it.
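The retry behavior described above can be sketched as follows (names and numbers are my own for illustration): the sender computes the hop fee from its possibly stale graph view, and after a fee-related failure it should recompute using the fresh policy carried back in the error and try again.

```python
# Sketch of stale-vs-fresh fee computation at one hop. The fee formula
# (base fee plus a proportional ppm fee) follows the gossiped channel
# policy fields; the class and variable names here are assumptions.
from dataclasses import dataclass

@dataclass
class ChannelPolicy:
    base_fee_msat: int
    fee_rate_ppm: int

def hop_fee_msat(policy: ChannelPolicy, amt_msat: int) -> int:
    # Fee = base fee + amount * rate (parts per million), floored.
    return policy.base_fee_msat + amt_msat * policy.fee_rate_ppm // 1_000_000

# Sender's stale view of the hop's policy, from old gossip.
graph_view = ChannelPolicy(base_fee_msat=1000, fee_rate_ppm=100)
# Fresh policy the hop would return in its fee-related error.
fresh_policy = ChannelPolicy(base_fee_msat=1000, fee_rate_ppm=700)

amt = 1_000_000_000  # a 1,000,000-sat payment, in msat
stale_fee = hop_fee_msat(graph_view, amt)    # what the wallet offered
retry_fee = hop_fee_msat(fresh_policy, amt)  # what a correct retry should offer
```

The gap between the two values is exactly why the first attempt fails: the HTLC arrives carrying the stale fee, and the hop rejects it.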
Then it occurred to me that balancing channels with this scheme (setting low fees when you have a lot of liquidity and high fees when it depletes) creates a sort of logical contradiction. Precisely when your bitcoins are locked in the channel and available to route, you earn very little; once the channel depletes and fewer of your bitcoins are locked, you raise the fee, but by then it's unlikely you'll earn more than you could have while the funds were there. It all seemed somewhat illogical, so I thought it would be better to just set a fixed fee. After all, when the channel depletes, raising fees only signals to future senders that we can't process a large payment through this channel; we financially discourage them from sending through us. But there are other ways to avoid losing profit (since higher fees often discourage senders from using your channel at all). For example, if we lack funds in a channel and someone tries to send through it, our Lightning node would simply return a packet to the sender indicating that funds temporarily can't be sent through this channel.
Interesting. The first thing you are discussing ... "The sender's wallet used a lower fee because that was what it saw in the network graph, which didn't carry fresh data well due to the protocol's significant propagation delay" -- this is something I've seen a lot on the Lightning Network, but I've never seen it quantified. Anecdotally, though, there do seem to be nodes used for payments which often "cache" old fee information from the network. My assumption is that many might be mobile clients with poor connectivity. It's actually for this reason that none of our nodes use "fast" automatic fee changes ... my experience is that you shouldn't change fees more than maybe once or twice a week for any given channel, because of this "caching" behavior. I've also seen attempted payments which are clearly using fee information that is several days out of date.
Regarding this second issue, you write: "our Lightning Network server would simply send a packet to the sender indicating that funds can't be sent through this channel temporarily." Do you mean like with LND's "updatechanstatus" API? https://lightning.engineering/api-docs/api/lnd/router/update-chan-status/ .... if so, wouldn't this prevent your depleting channel from "refilling" from the other side? Or maybe you are referring to a different strategy?
Regarding your last question: I meant that the Lightning Network protocol includes an onion error type for when a payment fails at a hop and cannot be pushed further. The intermediate node on the route sends an onion back to the sender containing an error code indicating that the channel temporarily cannot forward the payment due to fee changes, and it specifies the new fees in that same onion. I haven't looked it up just now and don't have the protocol on hand, but it's definitely there, described somewhere in the BOLT specifications.
I got a bit confused: in answering your question I indicated the wrong type of error. But in general, when a payment cannot be forwarded because, for example, we don't have enough balance on our side to send it further, an onion with a temporary error code is likewise sent back to the sender. From it, they can see which node along the route returned it, so they don't send a payment of the same size through the same channel again.
Actually, my point was that instead of specifically regulating fees on drained channels, you can just rely on such packets being sent to the sender whenever a payment cannot go through due to insufficient liquidity. And if there is still enough liquidity for the payment, even a small one, let it go through at the fixed rate that applies at any balance of the channel. I hope that's clearer.
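The sender-side behavior being relied on here can be sketched like this (a toy model with made-up channel names, not any real implementation): after a temporary failure from an intermediate hop, the sender excludes that channel for the rest of the payment attempt and routes around it.

```python
# Toy sketch: on a temporary_channel_failure from a hop, prune that
# channel from route-finding for this payment and retry another route.

def find_route(routes, excluded):
    # Return the first candidate route that avoids all excluded channels.
    for route in routes:
        if not any(chan in excluded for chan in route):
            return route
    return None

# Each candidate route is a tuple of channel ids; "a->b:42" is the
# depleted channel in this example.
routes = [("a->b:42", "b->c:17"), ("a->d:7", "d->c:9")]
excluded = set()

route = find_route(routes, excluded)        # first attempt picks channel 42
excluded.add("a->b:42")                     # hop returns temporary_channel_failure
retry_route = find_route(routes, excluded)  # retry avoids the drained channel
```

This illustrates the trade-off discussed below: each pruning round costs the payer an extra round trip, which is where the latency concern comes from.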
You wrote....
"you can just rely on such packets to be sent to the sender if the payment cannot go through due to insufficient liquidity"
.... Right -- "temporary_channel_failure" -- here https://github.com/lightning/bolts/blob/2b9c73b48c1920df8e7e867d110fab2d720a059b/04-onion-routing.md?plain=1#L1157 .
The issue is that hitting failures like this has the effect of increasing payment latency for users, and also, if there are too many failures, the payment will time out entirely. Shouldn't we, for the sake of users, be trying to insulate payers from potential failures?
And .... isn't a good way to do THAT simply to signal to the network "hey, don't use this channel in this direction", by putting my fees high ..... ?
I would think that rather than fees, what you are both describing is the perfect use case for the max_htlc setting? I know some node runners don't use this setting for various reasons, but when it is used, it can signal available liquidity (or the lack of it) to the network while still minimizing or even preventing temporary_channel_failures.
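A minimal sketch of that idea, with assumed names and parameters (this is not any node implementation's actual policy): advertise a max HTLC capped at spendable local balance, rounded down to a coarse bucket so the value doesn't leak the exact balance and doesn't need re-gossiping after every forwarded HTLC.

```python
# Sketch: derive an advertised max HTLC from local balance, so senders
# skip the channel for payments it cannot forward instead of hitting
# temporary_channel_failure. Bucket size and reserve handling are
# illustrative assumptions.

def advertised_max_htlc_msat(local_balance_msat: int,
                             channel_reserve_msat: int,
                             bucket_msat: int = 100_000_000) -> int:
    # Only balance above the reserve is actually spendable.
    spendable = max(0, local_balance_msat - channel_reserve_msat)
    # Round down to a coarse bucket (here 100,000 sat) to avoid leaking
    # the precise balance and to keep channel updates infrequent.
    return (spendable // bucket_msat) * bucket_msat

cap = advertised_max_htlc_msat(local_balance_msat=1_234_567_000,
                               channel_reserve_msat=10_000_000)
```

The coarse bucketing also plays well with the gossip-staleness point made earlier in the thread: the advertised value only changes when the balance crosses a bucket boundary, not on every payment.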
Sorry that was the incorrect deeplink to the Bolts.. here is where I intended:
" - if during forwarding to its receiving peer, an otherwise unspecified,
transient error occurs in the outgoing channel (e.g. channel capacity reached,
too many in-flight HTLCs, etc.):
- return a `temporary_channel_failure` error."