There's no point arguing over what you consider a moral argument then.
Oh, I think I see your argument. If you run with a stricter data-carrying policy, you have a higher chance of relaying lower-fee transactions to Ocean's mining pool. Is that really it?
Why run with a mempool at all then?
Paying to a fake key or hash cannot be controlled by policy, no matter how aggressive you get. You might say this is expensive, but not by much more than embedding data in fake p2ms (bare multisig) outputs, which *is* controllable by policy. Is there a point to forcing people into the worst class of data embedding in terms of node resource consumption?
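To make that concrete, here is a minimal sketch in Python of a fake P2PKH output with a hypothetical 20-byte payload: the data sits where a pubkey hash would, so no relay policy can tell it apart from a real payment, and because the output looks spendable, every node carries it in the UTXO set indefinitely.

```python
# Arbitrary data disguised as a pay-to-pubkey-hash output. The 20 bytes in
# the hash slot are indistinguishable from a real HASH160 of a public key,
# so no policy filter can reject this, and the output enters the UTXO set.
data = b"these are 20 bytes.."  # hypothetical payload, exactly 20 bytes
assert len(data) == 20

# OP_DUP OP_HASH160 <20-byte "hash"> OP_EQUALVERIFY OP_CHECKSIG
fake_p2pkh = bytes([0x76, 0xa9, 0x14]) + data + bytes([0x88, 0xac])
print(fake_p2pkh.hex())
```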
Every node has to enforce the same consensus rules; what we are debating here is policy, which only affects the mempool and may indeed differ between nodes. If the data-carrying transaction propagates whether or not the option is switched on, and the transaction does not incur a disproportionate cost to validate and relay, there is little point in not relaying it, since it will have been received and processed by the node at the latest by the time it is mined.
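For reference, the switch being debated is Bitcoin Core's existing data carrier option, set in bitcoin.conf (shown here with its long-standing defaults):

```
datacarrier=1        # relay and mine transactions with OP_RETURN data carrier outputs
datacarriersize=83   # max OP_RETURN scriptPubKey size relayed (83 allows 80 data bytes)
```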
It could dampen future growth of the UTXO set, which I think is very important.
Yes, there is a bit more to it. The data in OP_RETURN outputs has the least impact on node runners, since it is not added to the UTXO set.
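A sketch of why that is, in Python, mirroring the logic of Bitcoin Core's `CScript::IsUnspendable()` check during the UTXO-set update (function names here are illustrative): outputs whose script begins with OP_RETURN are provably unspendable and are simply never added to the set.

```python
OP_RETURN = 0x6a
MAX_SCRIPT_SIZE = 10_000  # Bitcoin Core's MAX_SCRIPT_SIZE

def is_unspendable(script_pubkey: bytes) -> bool:
    # Mirrors CScript::IsUnspendable(): a script that begins with OP_RETURN
    # (or exceeds the maximum script size) can never be spent.
    return (len(script_pubkey) > 0 and script_pubkey[0] == OP_RETURN) \
        or len(script_pubkey) > MAX_SCRIPT_SIZE

def add_coins(utxo_set: dict, txid: str, outputs: list) -> None:
    # Sketch of the UTXO-set update: provably unspendable outputs are skipped,
    # so OP_RETURN data never enters the set every node has to keep around.
    for index, script_pubkey in enumerate(outputs):
        if is_unspendable(script_pubkey):
            continue
        utxo_set[(txid, index)] = script_pubkey
```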
> The argument for "ACK" seems to center around disincentivizing using the p2p network and mempool to convey proposed transactions into the block template.
How are you coming to this conclusion? The exact opposite of that is the goal here.
I think Ocean's market share is too small to have an effect here, though obviously that might change. They also use a different client already, so why should Bitcoin Core maintain something that they don't even directly use?
Cash on the internet
I agree with your last post here. The thinking of most Core developers is that there is no point in maintaining an option that effectively achieves nothing; it has been compared to a placebo. If transactions make it into blocks anyway, and there is no physical resource-consumption downside, what is the point of keeping them out of the mempool?
Indeed, there might be a qualitative change, but no quantitative one for node runners. The change under discussion at the moment will probably not lead to a big shift in data embedding on the chain, though. Stamps, BRC-20, and inscriptions will probably not move their infrastructure to a potentially more expensive OP_RETURN setup; bare multisig and embedding data in the witness will probably remain more efficient for them. The hope behind this change is that future protocols that need to anchor data in the chain every so often will do so in a prunable way that does not pollute the UTXO set. They will embed the data anyway, but changing the OP_RETURN limits might reduce the harm.
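To put rough numbers on "more expensive", here is a back-of-the-envelope sketch in Python. It ignores the witness envelope and transaction scaffolding overhead, but the core asymmetry holds: non-witness bytes weigh 4 weight units each under BIP 141, witness bytes 1 WU, so OP_RETURN data costs roughly four times as much per byte as witness-embedded data.

```python
def compact_size_len(n: int) -> int:
    # Length of Bitcoin's compact-size integer encoding for n.
    return 1 if n < 253 else 3 if n <= 0xFFFF else 5

def op_return_output_weight(data_len: int) -> int:
    # Weight of an output embedding data_len bytes after OP_RETURN.
    # Non-witness bytes count 4 weight units each (BIP 141).
    push_overhead = 1 if data_len <= 75 else 2 if data_len <= 255 else 3
    script_len = 1 + push_overhead + data_len  # OP_RETURN + push opcode(s) + data
    return 4 * (8 + compact_size_len(script_len) + script_len)

def witness_embedding_weight(data_len: int) -> int:
    # Witness bytes count 1 weight unit each; envelope overhead ignored.
    return data_len

for n in (80, 10_000, 100_000):
    print(n, op_return_output_weight(n), witness_embedding_weight(n))
```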
Segwit has not opened the door for that. It was always possible by replacing public keys with data in transaction outputs.
If the assumption is that blocks will be full, which is the assumption we should be operating under, this is not true. Blocks cannot get fuller than full because of a policy change.
We should always calculate future resource burdens on nodes with full blocks, if the premise is that we want Bitcoin to be used. Changing policy does not change this calculation.
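As a back-of-the-envelope illustration of that calculation (Python; the consensus constants are real, the rest is rounding):

```python
MAX_BLOCK_WEIGHT = 4_000_000    # consensus limit in weight units (BIP 141)
BLOCKS_PER_YEAR = 6 * 24 * 365  # ~52,560 blocks at one per ten minutes

# weight = 3 * base_size + total_size, so a block's serialized size is
# bounded above by its weight: at most ~4 MB when almost all bytes are witness.
max_block_bytes = MAX_BLOCK_WEIGHT

# No relay-policy setting changes this bound; only consensus rules could.
yearly_growth_gb = max_block_bytes * BLOCKS_PER_YEAR / 1e9
print(f"worst-case chain growth: ~{yearly_growth_gb:.0f} GB per year")  # ~210
```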
There is no such thing as "broader consensus". There is consensus and there is policy, and there is no fuzzy line in between. Both are part of node software, and it is good for the health of the network if policy is at least somewhat consistent between nodes, since that reduces latency and p2p traffic, but no consensus mechanism is involved and consistency is not crucial. You can run a full node without a mempool at all. Maybe you are referring to what might be called the "social consensus" within the broader community of node runners to adopt similar policy rules?
I think you are confusing things here. Block validation does not have much to do with policy. Every node has to run the same block validation checks or risk inadvertently forking itself off, or getting forked off by an attacker feeding it a specially crafted block, and falling out of consensus. My choice as a user would be running Bitcoin Core, but I could also be running btcd or libbitcoin, as long as they implement the exact same checks in their validation logic. Bitcoin Knots re-uses the exact same block validation logic as Bitcoin Core, so there is no difference there. We all have to apply the same logic, or risk falling out of consensus.
The difference that policy makes, in the case where a Bitcoin Core and a Bitcoin Knots client enforce different policy rules, is this: if a transaction enters a Bitcoin Core client's mempool and is then included in a newly mined block, Bitcoin Core does not have to re-request the transaction from a peer and can validate the block with the transaction it already has. If Bitcoin Knots, on the other hand, does not have the transaction yet, it requests it from a peer first, and then validates the block with it. In the end, the result is the same: both nodes validate the block in the same manner, persist the same block of transactions to disk, and arrive at the same view of the set of unspent transaction outputs. Applying the same block validation rules is the consensus all nodes have to maintain with each other, and policy does not really play into that.
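A toy sketch of that flow in Python (the function names are illustrative, not Bitcoin Core's API): the only thing policy changes is how often a node has to fetch a block's transactions from peers before running the identical validation.

```python
def validate_block(txs) -> bool:
    # Stand-in for the full consensus checks, which every node runs identically.
    return all(tx is not None for tx in txs)

def connect_block(block_txids, mempool, request_from_peer) -> bool:
    # A node with a stricter relay policy simply has more mempool misses and
    # fetches those transactions from peers; the validation that follows is
    # the same on every node, so policy never touches consensus.
    txs = [mempool.get(txid) or request_from_peer(txid) for txid in block_txids]
    return validate_block(txs)
```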
The result of checking a block's validity in Bitcoin Core. This includes validating its header, merkle roots, and transactions, spending its coins, checking its coinbase, and protecting against duplicates.
What mechanism is used for this consensus? If everybody has their own pool, how is this building consensus? My understanding is that in Bitcoin, consensus is built through proof of work and block validation.