📅 Original date posted:2023-08-03
🗒️ Summary of this message: The author clarifies their intention to encourage research on monetary-based denial-of-service deterrence and provides a rough proof-of-work as an approach to explore. They also discuss the challenges of implementing fees based on the time an HTLC is held.
📝 Original message:
On Mon, Jul 31, 2023 at 02:42:29PM -0400, Clara Shikhelman wrote:
> > A different way of thinking about the monetary approach is in terms of
> > scaling rather than deterrence: that is, try to make the cost that the
> > attacker pays sufficient to scale up your node/the network so that you
> > continue to have excess capacity to serve regular users.
Just to clarify, my goal for these comments was intended to be mostly
along the lines of:
"I think monetary-based DoS deterrence is still likely to be a fruitful
area for research if people are interested, even if the current
implementation work is focussed on reputation-based methods"
At least the way I read the summit notes, I could see people coming away
with the alternative impression; ie "we've explored monetary approaches
and think there's nothing possible there; don't waste your time", and
mostly just wanted to provide a counter to that impression.
The scheme I outlined was mostly provided as a rough proof-of-work to
justify thinking that way and as perhaps one approach that could be
researched further, rather than something people should be actively
working on, let alone anything that should distract from working on the
reputation-based approach.
After talking with harding on irc, it seems that was not as obvious in
text as it was in my head, so just thought I'd spell it out...
> As for liquidity DoS, the “holy grail” is indeed charging fees as a
> function of the time the HTLC was held. As for now, we are not aware of a
> reasonable way to do this.
Sure.
> There is no universal clock,
I think that's too absolute a statement. The requirement is either that
you figure out a way of using the chain tip as a clock (which I gave a
sketch of), or you setup local clocks with each peer and have a scheme
for dealing with them being slightly out of sync (and probably use the
chain tip as a way of ensuring they aren't very out of sync).
> and there is no way
> for me to prove that a message was sent to you, and you decided to pretend
> you didn't.
All the messages in the scheme I suggested involve commitment tx updates
-- either introducing/closing a HTLC or making a payment for keeping a
HTLC active and tying up your counterparty's liquidity. You don't need to
prove that messages were/weren't sent -- if they were, your commitment
tx is already updated to deal with it, if they weren't but should have
been, your channel is in an invalid state, and you close it onchain.
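To make that concrete, here's a very rough sketch (the names and the
2ppm/block rate are just made up for illustration) of how each block tick
could be settled as an ordinary balance adjustment in the next commitment
update:

    # illustrative only: per-block liquidity fee moved between commitment balances
    PPM_PER_BLOCK = 2  # roughly 10% pa at ~52,560 blocks per year

    def liquidity_fee_msat(htlc_amount_msat, blocks_elapsed, rate_ppm=PPM_PER_BLOCK):
        # fee owed for keeping an HTLC of this size in flight for this many blocks
        return htlc_amount_msat * rate_ppm * blocks_elapsed // 1_000_000

    def apply_block_tick(balances, htlc, old_height, new_height):
        # debit the HTLC holder, credit the offerer, as part of the commitment update
        fee = liquidity_fee_msat(htlc["amount_msat"], new_height - old_height)
        balances[htlc["holder"]] -= fee
        balances[htlc["offerer"]] += fee
        return balances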
To me, proving things seems like something that comes up in reputation
based approaches, where you need to reference a hit on someone else's
reputation to avoid taking a hit on yours, rather than a monetary based
one, where all you should need to do is check you got paid for whatever
service you were providing, and conversely pay for whatever services
you've been requiring.
> It can easily happen that the fee for a two-week unresolved
> HTLC is higher than the fee for a quickly resolving one.
That should be the common case, yes, and it's problematic if you can have
both a high percentage fee (or a high amount), and a distant timeout.
But that may be a situation you can avoid, and I gave a sketch of one
way you could do that.
> I think this is another take on time-based fees. In this variation, the
> victim is trying to take a fee from the attacker. If the attacker is not
> willing to pay the fee (and why would they?), then the victim has to force
> close. There is no way for the victim to prove that it is someone
> downstream holding the HTLC and not them.
The point is that you get paid for your liquidity being held hostage;
whether the channel is closed or stays open. If that works, there's
no victim in this scenario -- you set a price for your liquidity to be
reserved over time in the hope that the payment will eventually succeed,
and you get paid that fee, until whoever currently holds the HTLC decides
the chance of success isn't worth the ongoing cost anymore.
The point of force closing is the same as any force close -- your
counterparty stops following the protocol you both agreed to. That can
happen any time, even just due to cosmic rays.
> > > - They’re not large enough to be enforceable, so somebody always has
> > > to give the money back off chain.
> > If the cap is 500ppm per block, then the liquidity fees for a 2000sat
> > payment ($0.60) are redeemable onchain.
> This heavily depends on the on-chain fees, and so will need to be
> updated as a function of that, and adds another layer of complication.
I don't think that's true -- this is just a direct adjustment to the
commitment tx balance outputs, so doesn't change the on-chain size/cost
of the commitment tx.
The link to on-chain fees (at least in the scheme I outlined) is via
the cap (for which I gave an assumed value above) -- you don't want the
extra profit your counterparty would get from that adjustment to
outweigh something like sum(their liquidity value of locking their funds
up due to a unilateral close; the unilateral close fees that they pay;
channel reopening costs).
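As a rough illustration of that constraint (all numbers below are
placeholders, not recommendations):

    # pick the liquidity-fee cap so the extra balance a counterparty could gain
    # by stalling never outweighs what a unilateral close would cost them
    def max_cap_ppm(payment_sat, close_fee_sat, reopen_cost_sat, liquidity_cost_sat):
        close_cost_sat = close_fee_sat + reopen_cost_sat + liquidity_cost_sat
        return close_cost_sat * 1_000_000 // payment_sat

    # eg a 2,000,000 sat payment, ~2000 sat close fee, ~1000 sat to reopen,
    # ~500 sat of liquidity value lost while the funds sit on-chain:
    max_cap_ppm(2_000_000, 2000, 1000, 500)   # -> 1750 ppm

It's only that cap value that needs revisiting when on-chain fees move,
not the structure of the commitment tx.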
One thing I wonder a bit about is if it might be easier to mess around
with some of these ideas somewhere that supported millisat/picobtc values
natively (and perhaps also had low fees), that way you don't have to
worry as much about the difference between things that are included in
the commitment tx versus are just hopefully true due to the friction of
closing/reopening channels. You could probably make picobtc precision
happen on-chain with an issued asset on Liquid, if you didn't mind
limiting it to 2100BTC total (~$63M), and perhaps put issuance under a
smart contract that automatically swaps it for L-BTC at a 1-1M ratio.
Unfortunately, probably a lot of hassle to set that up, let alone adapt
a lightning client to sit on top of it though.
Cheers,
aj
📅 Original date posted:2023-07-25
🗒️ Summary of this message: Proposal to use a tapscript branch instead of MuSig2 signing for unilateral closes in the Lightning Network. Discusses dynamic fees and Bitcoin as a universal clock.
📝 Original message:
Hi Zeeman,
> A proposal I made in the Signal group after the summit would be to not
> use MuSig2 signing for commitment transactions (== unilateral closes).
> Instead, we add a tapscript branch that is just `<pubkey_1> OP_CHECKSIGVERIFY
> <pubkey_2> OP_CHECKSIG` and use that for unilateral closes.
> We only sign with MuSig2 on mutual closes, after we have negotiated
> closing fees (whether that is the current long closing conversation, or
> simplified closing) so that only mutual closes require nonce
> storage.
>
> As mutual closes assume continuous connectivity anyway, we can keep
> counterparty nonces in volatile RAM and not store nonces in persistent
> storage; if a disconnection occurs, we just remove it from
> volatile RAM and restart the mutual close negotiation on reconnection.
>
> This is more palatable as even with a very restrictive hardware device
> you do not have to store the peer nonce in persistent storage.
> The hope is that mutual closes dominate over unilateral closes.
Hardware devices with reasonable persistent storage sound cheaper than the
additional fee-bumping reserve one must conserve (locally or with a
third party) to pay for the probabilistic worst-case `<pubkey_1> OP_CHECKSIGVERIFY
<pubkey_2> OP_CHECKSIG` satisfying witness.
> Conditional fees on the Lightning Network are already dynamic,
> with many people (including myself) writing software that measures demand
> and changes price accordingly.
> Why would unconditional fees be necessarily static, when there is no
> mention of it being static?
While I'm in sync with you on the Lightning Network being a system driven
by demand and charging prices accordingly, some of the recent jamming
mitigation proposals were built on the idea of "static fees" or
unconditional fees, e.g. https://eprint.iacr.org/2022/1454.pdf. As soon as you
start to think in terms of dynamic fees, you start to have issues w.r.t.
gossip convergence delays and rate-limiting of your local unconditional
fee updates.
> Given a "stereotypical" forwarding node, what is the most likely
subjective valuation?
> If a node is not a stereotypical forwarding node, how does it deviate
from the stereotypical one?
The answer is a function of your collection of historical forwarded HTLC
traffic and of secondary sources of information.
Somewhat as with base-layer fee estimation, the more consistent your
mempool data set, the better your valuation will be.
> The problem is that the Bitcoin clock is much too coarsely grained, with
> chain height advances occasionally taking several hours in the so-called
> "real world" I have heard much rumor about.
Sure, though I still think Bitcoin as a universal clock is the most costly
one for a Lightning counterparty to game, if it has to be used as a
mechanism to arbitrate the fees / reputation paid for the in-flight
duration of an HTLC. Even relying on timestamps seems to offer some margin
of malleation (e.g. an advance of at most 2h, per the consensus rules) if
you have hashrate capabilities.
> Would not the halt of the channel progress be considered worthy of a
reputation downgrade by itself?
That's an interesting point: rather than halting channel progress being
marked as a strict reputation downgrade, you could negotiate a "grace
delay" during which channel progress must be made (to allow for
casualties like software upgrades or connectivity issues).
Best,
Antoine
On Mon. 24 Jul. 2023 at 09:14, ZmnSCPxj wrote:
>
>
> > > - For taproot/musig2 we need nonces:
> > > - Today we store the commitment signature from the remote party. We
> don’t need to store our own signature - we can sign at time of broadcast.
> > > - To be able to sign you need the verification nonce - you could
> remember it, or you could use a counter:
> > > - Counter based:
> > > - We re-use shachain and then just use it to generate nonces.
> > > - Start with a seed, derive from that, use it to generate nonces.
> > > - This way you don’t need to remember state, since it can always be
> generated from what you already have.
> > > - Why is this safe?
> > > - We never re-use nonces.
> > > - The remote party never sees your partial signature.
> > > - The message always stays the same (the dangerous re-use case is
> using the same nonce for different messages).
> > > - If we used the same nonce for different messages we could leak our
> key.
> > > - You can combine the sighash + nonce to make it unique - this also
> binds more.
> > > - Remote party will only see the full signature on chain, never your
> partial one.
> > > - Each party has sign and verify nonces, 4 total.
> > > - Co-op close only has 2 because it’s symmetric.
> >
> > (I don't know when mailing list post max size will be reached)
> >
> > Counter-based nonces versus stateful memorization of them from a user
> perspective depends on the hardware capabilities you have access to.
> >
> > The taproot schnorr flow could be transparent from the underlying
> signature scheme (FROST, musig2, TAPS in the future maybe).
>
> A proposal I made in the Signal group after the summit would be to not use
> MuSig2 signing for commitment transactions (== unilateral closes).
>
> Instead, we add a tapscript branch that is just `<pubkey_1> OP_CHECKSIGVERIFY
> <pubkey_2> OP_CHECKSIG` and use that for unilateral closes.
> We only sign with MuSig2 on mutual closes, after we have negotiated
> closing fees (whether that is the current long closing conversation, or
> simplified closing) so that only mutual closes require nonce storage.
>
> As mutual closes assume continuous connectivity anyway, we can keep
> counterparty nonces in volatile RAM and not store nonces in persistent
> storage; if a disconnection occurs, we just remove it from volatile RAM and
> restart the mutual close negotiation on reconnection.
>
> This is more palatable as even with a very restrictive hardware device you
> do not have to store the peer nonce in persistent storage.
> The hope is that mutual closes dominate over unilateral closes.
>
>
>
> > > - We run into the same pricing issues.
> > > - Why these combinations?
> > > - Since scarce resources are essentially monetary, we think that
> unconditional fees are the simplest possible monetary solution.
> > > - Unconditional Fees:
> > > - As a sender, you’re building a route and losing money if it doesn’t
> go through?
> > > - Yes, but they only need to be trivially small compared to success
> case fee budgets.
> > > - You can also eventually succeed so long as you retry enough, even if
> failure rates are very high.
> > > - How do you know that these fees will be small? The market could
> decide otherwise.
> >
> > Static unconditional fees are a limited tool in a world where rational
> economic actors are pricing their liquidity as a function of demand.
>
> Conditional fees on the Lightning Network are already dynamic,
> with many people (including myself) writing software that measures demand
> and changes price accordingly.
> Why would unconditional fees be necessarily static, when there is no
> mention of it being static?
>
>
> > > - We have to allow some natural rate of failure in the network.
> > > - An attacker can still aim to fall just below that failure threshold
> and go through multiple channels to attack an individual channel.
> > > - There isn’t any way to set a bar that an attacker can’t fall just
> beneath.
> > > - Isn’t this the same for reputation? We have a suggestion for
> reputation but all of them fail because they can be gamed below the bar.
> > > - If reputation matches the regular operation of nodes on the network,
> you will naturally build reputation up over time.
> > > - If we do not match reputation accumulation to what normal nodes do,
> then an attacker can take some other action to get more reputation than the
> rest of the network. We don’t want attackers to be able to get ahead of
> regular nodes.
> > > - Let’s say you get one point for success and one for failure, a
> normal node will always have bad reputation. An attacker could then send
> 1 sat payments all day long, pay a fee for it and gain reputation.
> > > - Can you define jamming? Is it stuck HTLCs or a lot of 1 sat HTLCs
> spamming up your DB?
> >
> > Jamming is an economic notion, as such relying on the subjective
> appreciation by nodes of their local resources.
>
> Given a "stereotypical" forwarding node, what is the most likely
> subjective valuation?
> If a node is not a stereotypical forwarding node, how does it deviate from
> the stereotypical one?
>
>
> > > - The dream solution is to only pay for the amount of time that a HTLC
> is held in flight.
> > > - The problem here is that there’s no way to prove time when things go
> wrong, and any solution without a universal clock will fall back on
> cooperation which breaks down in the case of an attack.
> >
> > There is a universal clock in Bitcoin called the chain height advances.
>
> The problem is that the Bitcoin clock is much too coarsely grained, with
> chain height advances occasionally taking several hours in the so-called
> "real world" I have heard much rumor about.
>
>
> > > - What NACK says is: I’ve ignored all of your updates and I’m
> progressing to the next commitment.
> >
> > If resource bucketing or link-level liquidity management starts to be
> involved, one can mask behind "NACK" to halt the channel progress, without
> the reputation downgrade. Layer violation issue.
>
> Would not the halt of the channel progress be considered worthy of a
> reputation downgrade by itself?
>
> Regards,
> ZmnSCPxj
>
📅 Original date posted:2023-07-26
🗒️ Summary of this message: Carla thanks everyone for their participation in the meeting and acknowledges Wolf for hosting in NYC and Michael Levin for providing notes.
📝 Original message:
On Wed, Jul 19, 2023 at 09:56:11AM -0400, Carla Kirk-Cohen wrote:
> Thanks to everyone who traveled far, Wolf for hosting us in style in
> NYC and to Michael Levin for helping out with notes <3
Thanks for the notes!
Couple of comments:
> - What is the “top of mempool” assumption?
FWIW, I think this makes much more sense if you think about this as a
few related, but separate goals:
* transactors want their proposed txs to go to miners
* pools/miners want to see the most profitable txs asap
* node operators want to support bitcoin users/businesses
* node operators also want to avoid wasting too much bandwidth/cpu/etc
relaying txs that aren't going to be mined, both their own and that
of other nodes'
* people who care about decentralisation want miners to get near-optimal
tx selection with a default bitcoind setup, so there's no secret
sauce or moats that could encourage a mining monopoly to develop
Special casing lightning unilateral closes [0] probably wouldn't be
horrible. It's obviously good for the first three goals. As far as the
fourth, if it was lightning nodes doing the relaying, they could limit
each unilateral close to one rbf attempt (based on to_local/to_remote
outputs changing). And for the fifth, provided unilateral closes remain
rare, the special config isn't likely to cause much of a profit difference
between big pools and small ones (and maybe that's only a short term
issue, and a more general solution will be found and implemented, where
stuff that would be in the next block gets relayed much more aggressively,
even if it replaces a lot of transactions).
[0] eg, by having lightning nodes relay the txs even when bitcoind
doesn't relay them, and having some miners run special configurations
to pull those txs in.
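For what it's worth, the "one rbf attempt" rule could be as cheap as
something like this sketch (field names are hypothetical, it's only meant
to show the shape of the check):

    # relay a replacement unilateral close at most once per funding outpoint,
    # and only if it's a straightforward fee bump of the same close
    replaced = set()

    def should_relay(candidate, original):
        if candidate["funding_outpoint"] != original["funding_outpoint"]:
            return False
        if candidate["fee_sat"] <= original["fee_sat"]:
            return False             # not actually paying more
        if candidate["funding_outpoint"] in replaced:
            return False             # already allowed one bump for this close
        replaced.add(candidate["funding_outpoint"])
        return True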
> - Is there a future where miners don’t care about policy at all?
Thinking about the different goals above seems like it gives a clear
answer to this: as far as mining goes, no there's no need to care
about policy restricitions -- policy is just there to meet other goals:
making it possible to run a node without wasting bandwidth, and to help
decentralisation by letting miners just buy hardware and deploy it,
without needing to do a bunch of protocol level trade secret/black magic
stuff in order to be competitive.
> - It must be zero fee so that it will be evicted.
The point of making a tx with ephemeral outputs be zero fee is to
prevent it from being mined in non-attack scenarios, which in turn avoids
generating a dust utxo. (An attacking miner can just create arbitrary
dust utxos already, of course)
> - Should we add trimmed HTLCs to the ephemeral anchor?
> - You can’t keep things in OP_TRUE because they’ll be taken.
> - You can also just put it in fees as before.
The only way value in an OP_TRUE output can be taken is by confirming
the parent tx that created the OP_TRUE output, exactly the same as if
the value had been spent to fees instead.
Putting the value to fees directly would violate the "tx must be zero
fee if it creates ephemeral outputs" constraint above.
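If it helps, the rule here is just a couple of cheap checks (my
paraphrase, not an actual mempool policy implementation):

    # a tx creating an ephemeral (eg zero-value OP_TRUE) output must pay zero fee,
    # so it can only confirm via a fee-carrying child that spends that output
    def ephemeral_ok(tx_fee_sat, has_ephemeral_output, child_spends_ephemeral):
        if has_ephemeral_output and tx_fee_sat != 0:
            return False   # could be mined on its own and leave a dust utxo behind
        if has_ephemeral_output and not child_spends_ephemeral:
            return False   # needs a child to supply the fee and consume the output
        return True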
> ### Hybrid Approach to Channel Jamming
> - Generally when we think about jamming, there are three “classes” of
> mitigations:
> - Monetary: unconditional fees, implemented in various ways.
> - The problem is that none of these solutions work in isolation.
> - Monetary: the cost that will deter an attacker is unreasonable for an
> honest user, and the cost that is reasonable for an honest user is too low
> for an attacker.
A different way of thinking about the monetary approach is in terms of
scaling rather than deterrence: that is, try to make the cost that the
attacker pays sufficient to scale up your node/the network so that you
continue to have excess capacity to serve regular users.
In that case, if people are suddenly routing their netflix data and
nostr photo libraries over lightning onion packets, that's fine: you
make them pay amazon ec2 prices plus 50% for the resources they use,
and when they do, you deploy more servers. ie, turn your attackers and
spammers into a profit centre.
I've had an email about this sitting in my drafts for a few years now,
but I think this could work something like:
- message spam (ie, onion traffic costs): when you send a message
to a peer, pay for its bandwidth and compute. Perhaps something
like 20c/GB is reasonable, which is something like 1msat per onion
packet, so perhaps 20msat per onion packet if you're forwarding it
over 20 hops.
- liquidity DoS prevention: if you're in receipt of a HTLC/PTLC and
aren't cancelling or confirming it, you pay your peer a fee for
holding their funds. (if you're forwarding the HTLC, then whoever you
forwarded to pays you a slightly higher fee, while they hold your
funds) Something like 1ppm per hour matches a 1% pa return, so if
you're an LSP holding on to a $20 payment waiting for the recipient to
come online and claim it, then you might be paying out $0.0004 per hour
(1.4sat) in order for 20 intermediate hops to each be making 20%
pa interest on their held up funds.
- actual payment incentives: eg, a $20 payment paying 0.05% fees
(phoenix's minimum) costs $0.01 (33sat). Obviously you want this
number to be a lot higher than all the DoS prevention fees.
If you get too much message spam, you fire up more amazon compute
and take maybe 10c/GB in profit; if all your liquidity gets used up,
congrats you've just gotten 20%+ APY on your bitcoin without third party
risk and you can either reinvest your profits or increase your fees; and
all of those numbers are just noise compared to actual payment traffic,
which is 30x or 1500x more profitable.
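For anyone who wants to check the orders of magnitude, the arithmetic is
roughly as follows (the BTC price and onion packet size are assumptions):

    BTC_USD = 30_000
    USD_PER_SAT = BTC_USD / 1e8
    USD_PER_MSAT = USD_PER_SAT / 1000

    # message spam: 20c/GB with ~1300 byte onion packets
    0.20 * 1300 / 1e9 / USD_PER_MSAT     # ~0.9 msat per packet

    # liquidity: 1ppm per hour expressed as an annual rate
    1e-6 * 24 * 365                      # ~0.0088, ie roughly 1% pa

    # LSP holding a $20 HTLC, paying out 1ppm/hour across 20 hops
    20 * 1e-6 * 20.0                     # ~$0.0004 per hour (~1.3 sat)

    # success-case fee: $20 at 0.05%
    20 * 0.0005                          # $0.01 (~33 sat)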
> - How do you know that these fees will be small? The market could decide
> otherwise.
If the liquidity or message fees on the network are high it's easy to
spin up new lightning nodes at slightly lower fees and steal all that
traffic while still being hugely profitable.
> - The problem here is that there’s no way to prove time when things go
> wrong, and any solution without a universal clock will fall back on
> cooperation which breaks down in the case of an attack.
The amounts here are all very low, so I don't think you really need much
more precision than "hourly". I think you could even do it "per block"
and convert "1% pa" as actually "0.2 parts per million per block", since
the only thing time is relevant for is turning liquidity DoS into an APY
figure. Presumably that needs some tweaking to deal with the possibility
of reorgs or stale blocks.
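(Concretely: 1% pa over 144*365 = 52,560 blocks per year works out to
0.01/52560 * 1e6 ≈ 0.19, ie roughly 0.2ppm per block.)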
> - No honest user will be willing to pay the price for the worst case,
> which gets us back to the pricing issue.
> - There’s also an incentives issue when the “rent” we pay for these two
> weeks worst case is more than the forwarding fee, so a router may be
> incentivized to just hang on to that amount and bank it.
I think the worst case for that scenario is if you have a route
A1 -> A2 -> .. -> A19 -> B -> A20
then B closes the {B,A20} channel and at the end of the timeout A20 claims
the funds. At that point B will have paid liquidity fees to A1..A19 for
the full two week period, but will have only received a fixed payout
from A20 due to the channel close.
At 10% APY, with a $1000 payment, B will have paid ~$73 to A (7.3%). If
the close channel transaction costs, say, $5, then either you end up with
B wanting to close the channel early in non-attack scenarios (they collect
$73 from A20, but only pay perhaps 4c back to A1..A19, and perhaps $6 to
open and close the channel), or you end up with A holding up the funds
and leaching off B (B only collects, say, $20 from A20, but then A20
claims the funds after two weeks so is either up $75 if B didn't claim
the funds from A19, or is up $53 after B paid liquidity fees for 2 weeks).
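(Checking that figure: 19 hops each earning 10% pa on a $1000 HTLC for 14
days is 19 * 1000 * 0.10 * 14/365 ≈ $72.9, ie about 7.3% of the payment.)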
_But_ I think this is an unreasonable scenario: the only reason for B to
forward a HTLC with a 2 week expiry is if they're early in the route,
but the only reason to accept a large liquidity fee is if they're late
in the route. So I think you can solve that by only forwarding a payment
if the liquidity fee rate multiplied by the expiry is below a cap, eg:
A19 -> B : wants 36ppm per block; cap/ppm = 500/36 = 13.8
B -> A20 : expiry is in 13 blocks; wants 38ppm per block
(2ppm per block ~= 10% APY)
For comparison, at the start of the chain, things look like:
A2 -> A3 : wants 2ppm per block; cap/ppm = 500/2 = 250
A3 -> A4 : expiry is in 250 blocks; wants 4ppm per block
In each case, the commitment tx would look like:
$1000 HTLC paying Y refunding to X
$0.50 liquidity fee bonus to X's balance (500ppm cap)
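So the forwarding check itself is tiny -- something like the following,
using the numbers from the examples above and assuming a 500ppm cap:

    CAP_PPM = 500

    def ok_to_forward(fee_ppm_per_block, expiry_blocks, cap_ppm=CAP_PPM):
        # the per-block rate you'd owe upstream, times how long you might be
        # left holding the HTLC, has to stay under the cap
        return fee_ppm_per_block * expiry_blocks <= cap_ppm

    ok_to_forward(36, 13)    # late in the route: 36 * 13 = 468, ok
    ok_to_forward(2, 250)    # early in the route: 2 * 250 = 500, ok
    ok_to_forward(36, 250)   # high rate with a distant expiry: rejected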
> - They’re not large enough to be enforceable, so somebody always has
> to give the money back off chain.
If the cap is 500ppm per block, then the liquidity fees for a 2000sat
payment ($0.60) are redeemable onchain.
> - Does everybody feel resolved on the statement that we need to take this
> hybrid approach to clamp down on jamming? Are there any “what about
> solution X” questions left for anyone? Nothing came up.
(Actually implementing the above is obviously a tonne of work --
in particular it requires every node in the route to support paying
liquidity fees, for example -- assuming it doesn't turn out to be impossible, so
please don't take any of the above as an objection to using reputation
to reduce DoS vectors)
Cheers,
aj