openoms
aac07d95089ce6adf08b9156d43c1a4ab594c6130b7dcb12ec199008c5819a2f
Bitcoin | Lightning | Blink | RaspiBlitz on RPi and more | Self-hosting | Enjoyer of Linux Desktops and GrapheneOS | building hardware at diynodes.com
Replying to Awiteb

why?

Can't stop the algo from gaslighting about the current thing.

I come to X only to unfollow people. Maybe mute too.

Nothing more motivating to hold your own keys than a hard fork lined up.

You can always exit to the terminal with CTRL+C then type: `menu`

If you have no channels you might need to add a Lightning peer manually to progress past the startup sync screen.

This guy is floating his RPi5 on water while it validates the Bitcoin blockchain in under 4 hours.

https://x.com/L0RINC/status/1972062557835088347

Replying to El Flaco

https://pretyflaco.github.io/bitcoingovernance/

🧵 I published my perspective on Bitcoin governance.

Where does Bitcoin derive its censorship-resistance from? And how do we preserve it?

Sharing some frameworks I've found helpful 👇

The main insight: Bitcoin's unique value comes from censorship-resistance, and the two magic ingredients to achieve it are:

✅ Free users

✅ Free software

-> Censorship-resistance requires free users

-> Free users require free software

Traditional governance asks "who decides?"

These systems are easy to co-opt and can therefore never preserve censorship-resistance.

Bitcoin governance asks "how do we preserve free users?", and this is the key to preserving censorship-resistance.

For developers, this means:

-> Optimize for user agency

-> Default to user choice over "optimal" outcomes

Remember: we're building infrastructure for freedom.

These are just my current thoughts - governance philosophy evolves with experience - but it's clear to me that censorship-resistance is a derivative of free users.

There is a leap of faith required in bitcoin’s governance model.

We must trust that when users are given genuine freedom and access to good information, they will make choices that preserve what makes bitcoin valuable.

Users who want bitcoin to remain censorship-resistant will choose software and rules that maintain that property.

Users who prefer other properties may make different choices, but the network’s evolution will reflect the aggregate of all these individual decisions.

When we take the leap and trust that free users will make the right choices, Bitcoin succeeds.

I would love to hear perspectives from builders, users, and researchers working on or thinking about these problems.

Take a step back and think through how the rules of Bitcoin are coming to be.

This article provides valuable insight with a fresh perspective on the topic: https://pretyflaco.github.io/bitcoingovernance/

nostr:nevent1qvzqqqqqqypzqnlms75hfwa49l9vwdahns54cprajxkwmzfrkzu93hmu453gz9tlqqsypt0plwheeawsxtnmpfytjc4wf3ww97a0a2cu3m00tyg3mjj3qsgv2mg0v

Replying to Super Testnet

> It turns node runners into network police

It doesn't turn them into "network police" because they aren't policing "the network" (other people's computers) but only their own. I run spam filters because I don't want spam in *my* mempool. If other people want it, great, their computer is *their* responsibility.

> constantly define and redefine what constitutes spam

It doesn't constantly need redefinition. Spam is a transaction on the blockchain containing data such as literature, pictures, audio, or video. A tx on the blockchain is not spam if, in as few bytes as possible, it does one of only two things, and nothing else: (1) transfers value on L1 or (2) sweeps funds from an HTLC created while trying to transfer value on an L2. By "value" I mean the "value" field in L1 BTC utxos, and by "transferring" it I mean reducing the amount in that field in the sender's utxos and increasing it in the recipient's utxos.

> Data-embedding techniques will simply evolve to bypass the latest filters

And filters will simply evolve to neutralize the latest bypass. The filterers cannot lose this race if they are more diligent than the spammers.

> [What if they] make their transactions technically indistinguishable from "normal" financial ones

Then we win, because data which is technically indecipherable cannot be used in a metaprotocol. The spammers lose if their software clients cannot automatically decipher the spam.

If the spammers develop some technique for embedding spam that can be automatically deciphered, we add that method to our filters, and now they cannot use that technique in the filtering mempools. If they make a two-stage technique where they have to publish a deciphering key, then they either have to publish that key on chain -- which allows us to detect and filter it -- or they have to publish it off-chain, which is precisely what we want: now their protocol requires an off-chain database, and all of their incentives call for using that database to store more and more data.

I appreciate the detailed response, but in these points we are in disagreement:

1. Policing your node vs. "the network": Framing this as only policing your own node overlooks the network externalities. Your filtering directly impacts the efficiency of block propagation for your peers. It turns an individual policy choice into a network-wide cost.

2. Your definition of what transactions should be allowed: The proposed definition of "spam" is not a filtering policy; it's an argument for a hard fork. The current Bitcoin consensus explicitly allows these transactions, and has for years. To enforce your narrow definition network-wide, you would need to change the fundamental rules of the protocol. This brittle definition would not only freeze Bitcoin's capabilities but would also classify many existing financial tools, from multisig to timelocks and covenants, as invalid. The arbitrary exception for L2 HTLCs only proves the point: you're not defining spam, you're just green-lighting your preferred use cases.

3. The arms race is asymmetric: This isn't a battle of diligence; it's a battle of economic incentives. There's a powerful financial motive to embed data, but only a weak, ideological one to filter it.

4. You're underestimating steganography: You're focused on overt data, but the real challenge is data hidden within what looks like a perfectly valid financial transaction. A filter cannot distinguish intent. To block it, you'd have to block entire classes of legitimate transactions that are valid under today's consensus, which is a non-starter.

Replying to Super Testnet

> If a node is blind to a large segment of the real mempool, wouldn't it be slower to detect a sudden spike in the fee market, potentially causing it to fall behind in a fee-bumping war?

A fee-bumping war? I think I need more context. I am not aware of any real-world software that requires users to competitively bump their fees. Are you saying there *is* such a protocol? Are you perhaps referring to the LN attack where the perp repeatedly RBF's an input to a tx that creates old state?

Even there, the would-be victim doesn't have to repeatedly RBF anything. Instead, he is expected to repeatedly modify his justice tx to use the txid of whatever transaction the perp creates after *he* (the perp) performs an RBF. The victim *does* have to set feerate each time, but his feerate does not compete with his opponent's, as his opponent is RBF'ing the input to the justice transaction's *parent,* whereas the victim simply sets the feerate of the justice transaction *itself,* and he is expected to simply set it to whatever the going rates are.

Moreover, as mentioned above, I think it's absurd to expect a real-world scenario where Knots reports a too-low feerate for 2016 blocks in a row, despite getting information about the current rates from each of those blocks *as well as* its mempool. For that to happen, spam transactions would have to be broadcast at a completely absurd, constantly-increasing rate, for 2016 blocks in a row, with bursts of *yet further* increased speed right after each block gets mined (otherwise the fee estimator would know the new rate because it shows up in the most recent block), and the mempool would *also* have to go practically unused by anything else (otherwise the fee estimator would know the new rate when it shows up in the non-spam transactions that compete for blockspace with the spam transactions).

> On the other points we are also left with the problem that the network communication is breaking down because more nodes are rejecting the very transactions that miners are confirming in blocks.

This communication problem can be summarized as, "compact blocks are delayed when users have to download more transactions." I think driving down that delay is a worthwhile goal, but Core's strategy to achieve that goal is, I think, worse than the disease: Core's users opt to relay spam transactions as if they were normal transactions, that way they don't have to download them when they show up in blocks. If you want to do that, have at it, but it looks to me like a huge number of former Core users are saying "This isn't worth it," and they are opting for software that simply doesn't do that. I expect this trend to continue.

Fair point regarding the fee estimation, and I appreciate the detailed breakdown. I gladly accept that the fee estimation is robust enough even with a partial view of the mempool.

> Core's strategy to achieve that goal is, I think, worse than the disease: Core's users opt to relay spam transactions as if they were normal transactions, that way they don't have to download them when they show up in blocks.

The choice isn't between a cure and a disease (purity vs. efficiency), but about upholding network neutrality. The Core policy relays any valid transaction that pays the fee, without making a value judgment.

The alternative - active filtering - is a strategic dead end for a few reasons:

- It turns node runners into network police, forcing them to constantly define and redefine what constitutes "spam."

- This leads to an unwinnable arms race. As we've seen throughout Bitcoin's history, the definition of "spam" is a moving target. Data-embedding techniques will simply evolve to bypass the latest filters.

- The logical endgame defeats the purpose. The ultimate incentive for those embedding data is to make their transactions technically indistinguishable from "normal" financial ones, rendering the entire filtering effort futile.

Fewer valid transactions in my mempool will make my node unreliable at predicting the next block and estimating fees, especially in extreme cases where it could be critical.
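A toy sketch of this point (all numbers, transaction shapes, and the greedy block fill are made up for illustration; this is not Bitcoin Core's estimator): a node that filters out a class of fee-paying transactions can end up predicting a much lower next-block feerate than what miners will actually confirm.

```python
# Toy model: estimate the next-block feerate cutoff from the local mempool.
def next_block_feerate(mempool, block_vsize=1_000_000):
    """Greedily fill a block by descending feerate; return the lowest
    feerate that still made it in (0.0 for an empty mempool)."""
    txs = sorted(mempool, key=lambda tx: tx["feerate"], reverse=True)
    used, cutoff = 0, 0.0
    for tx in txs:
        if used + tx["vsize"] > block_vsize:
            break
        used += tx["vsize"]
        cutoff = tx["feerate"]
    return cutoff

# Hypothetical mempool: high-feerate data-carrying txs plus ordinary payments.
full = [{"vsize": 50_000, "feerate": 30.0, "data": True} for _ in range(25)] + \
       [{"vsize": 50_000, "feerate": 5.0, "data": False} for _ in range(30)]

# A filtering node drops the data txs and sees only the payments.
filtered = [tx for tx in full if not tx["data"]]

print(next_block_feerate(full))      # 30.0 - the cutoff miners will actually use
print(next_block_feerate(filtered))  # 5.0  - the filtering node's blind estimate
```

The filtering node is not wrong about its own mempool, but its view of the fee market diverges from the blocks that actually get mined.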

Not relaying is the same as not running that node at all for those transactions; it doesn't stop anyone else.

The filtering node slows down its own block verification - it will be later to reach the tip and will waste hashrate in that time if mining.

Only the fastest route counts, so even a supermajority of filtering nodes would not be significant.

I just wonder what the endgame can be here?

Filterers want to stop transactions they don't like, but no level of filter penetration can prevent a small fraction of nodes from relaying non-standard transactions and miners from accepting them directly.

Ocean is gathering hashrate.

When hard fork?

Increasing the OP_RETURN limit to match what can already be included in a valid block is like placing a garbage bin on a littered street.

Can't stop people from littering and can't even make them put rubbish in the bin, but can at least provide them with a less bad path.

OP_RETURN outputs pay the full price, are not stored in the chainstate, and are prunable from the downloaded data.
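A minimal sketch of the chainstate point (plain Python; 0x6a is the standard OP_RETURN opcode value, but the helper names here are made up): an OP_RETURN output is provably unspendable, so a node never has to add it to the UTXO set, and the embedded data can be pruned along with old block data.

```python
OP_RETURN = 0x6a  # standard Bitcoin script opcode

def op_return_script(data: bytes) -> bytes:
    """OP_RETURN followed by a single direct push of `data` (<= 75 bytes,
    the limit for a direct pushdata in this simplified sketch)."""
    assert len(data) <= 75, "direct push only in this sketch"
    return bytes([OP_RETURN, len(data)]) + data

def enters_utxo_set(script_pubkey: bytes) -> bool:
    """Nodes skip provably unspendable outputs when updating the chainstate:
    a scriptPubKey starting with OP_RETURN can never be spent."""
    return not (script_pubkey and script_pubkey[0] == OP_RETURN)

spk = op_return_script(b"hello")
print(spk.hex())             # 6a0568656c6c6f
print(enters_utxo_set(spk))  # False - carried in blocks, never in the UTXO set
```

The fee for such an output is still paid in full, which is the "full price" point: the data buys blockspace once instead of a permanent entry in every node's chainstate.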