Replying to Super Testnet

> If a node is blind to a large segment of the real mempool, wouldn't it be slower to detect a sudden spike in the fee market, potentially causing it to fall behind in a fee-bumping war?

A fee-bumping war? I think I need more context. I am not aware of any real-world software that requires users to competitively bump their fees. Are you saying there *is* such a protocol? Are you perhaps referring to the LN attack where the perp repeatedly RBF's an input to a tx that creates old state?

Even there, the would-be victim doesn't have to repeatedly RBF anything. Instead, he is expected to repeatedly modify his justice tx to use the txid of whatever transaction the perp creates after *he* (the perp) performs an RBF. The victim *does* have to set a feerate each time, but his feerate does not compete with his opponent's: the opponent is RBF'ing the input to the justice transaction's *parent,* whereas the victim sets the feerate of the justice transaction *itself,* and he is expected to simply set it to whatever the going rate is.
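To make the mechanics concrete, here is a rough sketch of the victim's side (the callables `estimate_feerate`, `build_justice_tx`, and `broadcast` are hypothetical stand-ins, not real LN or Core APIs):

```python
# Illustrative sketch only -- not real watchtower or wallet code.
# The three callables are hypothetical stand-ins supplied by the caller.

def respond_to_replacement(replacement_txid, revoked_outputs,
                           estimate_feerate, build_justice_tx, broadcast):
    """Called each time the perp publishes a new replacement of the
    transaction that creates old state. The victim does not RBF-race;
    he re-targets his justice tx at the new parent txid and pays
    whatever the going feerate is."""
    feerate = estimate_feerate()  # the going rate, not a competitive bid
    justice_tx = build_justice_tx(
        parent_txid=replacement_txid,      # whatever the perp just broadcast
        outputs_to_sweep=revoked_outputs,  # the old-state outputs
        feerate=feerate,
    )
    broadcast(justice_tx)
```

The point is that the victim's feerate is a number he reads off the market, not a bid against the perp's.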

Moreover, as mentioned above, I think it's absurd to expect a real-world scenario where Knots reports a too-low feerate for 2016 blocks in a row, despite getting information about the current rates from each of those blocks *as well as* its mempool. For that to happen, spam transactions would have to be broadcast at a completely absurd, constantly-increasing rate for 2016 blocks in a row, with bursts of *yet further* increases right after each block gets mined (otherwise the fee estimator would learn the new rate when it shows up in the most recent block). The mempool would *also* have to go practically unused by anything else (otherwise the fee estimator would learn the new rate when it shows up in the non-spam transactions that compete for blockspace with the spam transactions).
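As a toy illustration of why the estimator can't stay blind for long (this is a sketch, not the estimator Knots or Core actually run), consider an estimator that folds every confirmed block's feerates into its estimate:

```python
# Toy fee estimator -- a sketch, not the algorithm Knots/Core actually use.
# Because it learns from every confirmed block, it cannot stay blind to a
# feerate shift for anything like 2016 blocks.

class ToyEstimator:
    def __init__(self, smoothing=0.8):
        self.smoothing = smoothing
        self.estimate = 1.0  # sat/vB

    def on_block(self, block_feerates_sat_per_vb):
        """Fold the newest confirmed block into the running estimate,
        regardless of which transactions were in the local mempool."""
        rates = sorted(block_feerates_sat_per_vb)
        median = rates[len(rates) // 2]
        self.estimate = (self.smoothing * self.estimate
                         + (1 - self.smoothing) * median)
        return self.estimate

est = ToyEstimator()
print(est.on_block([2.0, 5.0, 8.0, 20.0]))  # 2.4 -- pulled toward the block's median of 8.0
```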

> On the other points we are also left with the problem that the network communication is breaking down because more nodes are rejecting the very transactions that miners are confirming in blocks.

This communication problem can be summarized as, "compact blocks are delayed when users have to download more transactions." I think driving down that delay is a worthwhile goal, but Core's strategy for achieving it is worse than the disease: Core's users opt to relay spam transactions as if they were normal transactions so that they don't have to download them when they show up in blocks. If you want to do that, have at it, but it looks to me like a huge number of former Core users are saying "This isn't worth it," and they are opting for software that simply doesn't do that. I expect this trend to continue.
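For context on where that delay comes from: under BIP 152 compact block relay, a node rebuilds a block from the announced transaction IDs and its own mempool, and anything it filtered (and therefore doesn't have) costs an extra getblocktxn/blocktxn round trip. A simplified sketch, not Core's actual implementation:

```python
# Simplified sketch of BIP 152 compact-block reconstruction -- not Core's
# actual code; real nodes use short transaction IDs, not full txids.

def reconstruct_block(announced_txids, mempool, request_missing):
    """Rebuild a block from a compact announcement. Every txid we do not
    already have (e.g. because our policy filtered it) must be fetched,
    which adds the extra round trip discussed above."""
    have, missing = [], []
    for txid in announced_txids:
        if txid in mempool:
            have.append(mempool[txid])
        else:
            missing.append(txid)
    if missing:
        have.extend(request_missing(missing))  # extra network round trip
    return have

# Example: tx "b" was filtered locally, so it has to be fetched.
mempool = {"a": "tx_a", "c": "tx_c"}
block = reconstruct_block(["a", "b", "c"], mempool,
                          request_missing=lambda ids: ["tx_" + i for i in ids])
```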

Fair point regarding the fee estimation, and I appreciate the detailed breakdown. I gladly accept that the fee estimation is robust enough even with a partial view of the mempool.

> Core's strategy to achieve that goal is, I think, worse than the disease: Core's users opt to relay spam transactions as if they were normal transactions, that way they don't have to download them when they show up in blocks.

The choice isn't between a cure and a disease (purity vs. efficiency); it's about upholding network neutrality. The Core policy relays any valid transaction that pays the fee, without making a value judgment.

The alternative - active filtering - is a strategic dead end for a few reasons:

- It turns node runners into network police, forcing them to constantly define and redefine what constitutes "spam."

- This leads to an unwinnable arms race. As we've seen throughout Bitcoin's history, the definition of "spam" is a moving target. Data-embedding techniques will simply evolve to bypass the latest filters.

- The logical endgame defeats the purpose. The ultimate incentive for those embedding data is to make their transactions technically indistinguishable from "normal" financial ones, rendering the entire filtering effort futile.


> It turns node runners into network police

It doesn't turn them into "network police" because they aren't policing "the network" (other people's computers) but only their own. I run spam filters because I don't want spam in *my* mempool. If other people want it, great, their computer is *their* responsibility.

> constantly define and redefine what constitutes spam

It doesn't constantly need redefinition. Spam is a transaction on the blockchain containing data such as literature, pictures, audio, or video. A tx on the blockchain is not spam if, in as few bytes as possible, it does one of only two things, and nothing else: (1) transfers value on L1 or (2) sweeps funds from an HTLC created while trying to transfer value on an L2. By "value" I mean the "value" field in L1 BTC utxos, and by "transferring" it I mean reducing the amount in that field in the sender's utxos and increasing it in the recipient's utxos.
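If I ever implement that definition, a first cut might look roughly like the predicate below. It's a sketch of a *local mempool policy*, not a consensus rule, and the two classifiers are hypothetical placeholders; a real version would have to inspect scripts, output types, and witness data:

```python
# Sketch of the definition above as a local mempool policy -- not a
# consensus rule. The classifiers are hypothetical placeholders.

def is_value_transfer(tx) -> bool:
    """Placeholder: true iff the tx only moves the 'value' field of L1
    utxos from sender to recipient, in as few bytes as possible."""
    return tx.get("kind") == "value_transfer"

def is_htlc_sweep(tx) -> bool:
    """Placeholder: true iff the tx sweeps funds from an HTLC created
    while trying to transfer value on an L2."""
    return tx.get("kind") == "htlc_sweep"

def is_spam(tx) -> bool:
    """Anything that is not one of the two allowed shapes stays out of
    *my* mempool; other nodes are free to do otherwise."""
    return not (is_value_transfer(tx) or is_htlc_sweep(tx))

assert is_spam({"kind": "inscription"}) is True
assert is_spam({"kind": "value_transfer"}) is False
```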

> Data-embedding techniques will simply evolve to bypass the latest filters

And filters will simply evolve to neutralize the latest bypass. The spammers cannot win this race if the filterers are more diligent than they are.

> [What if they] make their transactions technically indistinguishable from "normal" financial ones

Then we win, because data which is technically indecipherable cannot be used in a metaprotocol. The spammers lose if their software clients cannot automatically decipher the spam.

If the spammers develop some technique for embedding spam that can be automatically deciphered, we add that method to our filters, and now they cannot use that technique in the filtering mempools. If they make a two-stage technique where they have to publish a deciphering key, then they either have to publish that key on chain -- which allows us to detect and filter it -- or they have to publish it off-chain, which is precisely what we want: now their protocol requires an off-chain database, and all of their incentives call for using that database to store more and more data.

I appreciate the detailed response, but on these points we disagree:

1. Policing your node vs. "the network": Framing this as only policing your own node overlooks the network externalities. Your filtering directly impacts the efficiency of block propagation for your peers. It turns an individual policy choice into a network-wide cost.

2. Your definition of what transactions should be allowed: The proposed definition of "spam" is not a filtering policy; it's an argument for a hard fork. The current Bitcoin consensus explicitly allows these transactions, and has for years. To enforce your narrow definition network-wide, you would need to change the fundamental rules of the protocol. This brittle definition would not only freeze Bitcoin's capabilities but would also classify many existing financial tools from multisig to timelocks and covenants as invalid. The arbitrary exception for L2 HTLCs only proves the point: you're not defining spam, you're just green-lighting your preferred use cases.

3. The arms race is asymmetric: This isn't a battle of diligence; it's a battle of economic incentives. There's a powerful financial motive to embed data, but only a weak, ideological one to filter it.

4. You're underestimating steganography: You're focused on overt data, but the real challenge is data hidden within what looks like a perfectly valid financial transaction. A filter cannot distinguish intent. To block it, you'd have to block entire classes of legitimate transactions that are valid under today's consensus, which is a non-starter.

> Framing this as only policing your own node overlooks the network externalities

If I set up a fence around my house, that has neighborhood externalities. My neighbors can't see one another by looking across my lawn, for instance. But framing "anything with externalities" as "policing" the people it has an effect on is problematic. Just as it is not my responsibility to ensure that my two neighbors can see one another across my lawn, it is also not my responsibility to ensure that miners get their blocks to my peers quickly. I may decide to help some or all of them do that; but even if I do make such a decision, it is not as if that puts me in some position of responsibility where I cannot now apply filters to transactions that I *don't* want in my mempool and *don't* want to assist with.

> Your definition of what transactions should be allowed...is not a filtering policy; it's an argument for a hard fork.

Those things are not incompatible. One could theoretically propose something as a mempool filter *and* as a hard fork; the nice thing about mempools is that they do not require consensus to modify, so you can just do it, whereas a hard fork is very hard precisely because, unless you get a whole bunch of people to agree with you (i.e. get consensus), you end up just creating an independent network (not that there's anything wrong with that, unless you start scamming people with it).

If I *did* propose this for a fork, it would be a soft fork, not a hard one, as it would require *tightening* the rules, not loosening them. But it would have some of the same *effects* as a hard fork if it were contentious, because contentious soft forks (theoretically) split the network into incompatible branches just like hard forks do.

That said, while I don't want to entirely close the door on a soft fork, I think it is wise, for the aforesaid reasons, to just do it in my own mempool and tell other interested people (if any) how to do it in theirs -- because I don't need consensus for that, I get all the benefits I seek, and I make fewer people mad.

> The current Bitcoin consensus explicitly allows these transactions, and has for years. To enforce your narrow definition network-wide, you would need to change the fundamental rules of the protocol.

Nice that I don't *want* to enforce my definition network-wide, then. But we've been over similar ground already; perhaps you think that slightly slowing block propagation among my peers counts as "enforcing my definition network-wide." If so, I disagree, and I'd particularly like to highlight that this slowdown doesn't even affect how fast my *peers* receive a block unless *my* connection with a given peer would otherwise be their fastest available connection. (If they've got peers A, B, and Me, and peer A is faster than me anyway, then it doesn't matter that I have to download some transactions before serving them a block -- they were gonna get the block faster from peer A anyway.) And, in my personal case, I very much doubt that I am anyone's fastest connection, as I operate on pretty bad wifi that I find in hostels and airbnbs.
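To put made-up numbers on that parenthetical: a peer receives the block at the minimum of its peers' delivery times, so my extra download only matters if I would otherwise have been its fastest source.

```python
# Illustrative arithmetic with made-up latencies.
peer_delivery_ms = {
    "peer_A": 400,        # fast, unfiltered connection
    "peer_B": 650,
    "me":     900 + 300,  # my base latency plus an extra round trip for
                          # the transactions I filtered and must fetch first
}

print(min(peer_delivery_ms.values()))  # 400 -- they get the block from peer A either way
```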

> This brittle definition would not only freeze Bitcoin's capabilities but would also classify many existing financial tools from multisig to timelocks and covenants as invalid

If applied as a fork, yes, but that's not what I want to do. I only want to apply it in my own mempool, by not speeding along transactions that I find stupid. Good point about multisigs and timelocks, though; if I ever get around to implementing my preferred filter I will try to ensure it allows those, as I do want to help relay such transactions around on the network.

> The arbitrary exception for L2 HTLCs only proves the point: you're not defining spam, you're just green-lighting your preferred use cases

Greenlighting use cases that I "prefer" -- as in, want to see more widely adopted -- is precisely what I want to do in my own mempool. I don't want to help people spam the network; I want to help them adopt layer 2s and sometimes use L1 as money. So I want a filter that supports the latter things -- the things I like and want in my mempool -- and locally blocks the other things -- the things I don't like and don't want in my mempool.

> The arms race is asymmetric: This isn't a battle of diligence; it's a battle of economic incentives. There's a powerful financial motive to embed data, but only a weak, ideological one to filter it.

I suppose a similar thing is true of email spam: the motive to get email spam in front of many eyeballs is more powerful than my motive to block it from my inbox. Nonetheless, email filters are powerful enough to largely compensate for that asymmetry, and I'd like to help design mempool filters that offer similar compensation.

> You're underestimating steganography: You're focused on overt data, but the real challenge is data hidden within what looks like a perfectly valid financial transaction. A filter cannot distinguish intent. To block it, you'd have to block entire classes of legitimate transactions that are valid under today's consensus, which is a non-starter

Filtering entire classes of transactions that are valid under today's consensus is what this entire debate is about. I am enthusiastically in favor of doing so in my local mempool, and sharing what works with others who may have similar interests.