I had to listen to the WBD episode with Mechanic twice to make sure I got this right. One of the arguments Core is making is that nodes should have more complete mempools for better fee estimation. Mechanic said that Core doesn't even use the local mempool to estimate fees. I have no way to verify this, so correct me if I'm wrong, but if that's true, that's a blatant contradiction. Why would this be an argument if it doesn't even apply to their own software? That's insane.

I'm not sure if it's exactly a smoking gun, but it's a pretty huge red flag to me. I've been defending them because the idea that the OG implementation is being corrupted is almost unthinkable, and I wanted to believe Bitcoin is rock solid. Even so, I've still mostly advocated against forcing the removal of filters on nodes. Now I don't know what to think. I might actually have to run Knots.

nostr:nevent1qqs03sdt9t8f4jyeytjacyvrrdl5evp34zd8k24050sz0xtwn4wwawcpz4mhxue69uhhyetvv9ujuerpd46hxtnfduhsygpz7ghf3gp4hrle04xvf3dnfejuem9wykzrpmk2g6chk8524wu2tcpsgqqqqqqsz70lf3


Discussion

To filter or not to filter, that is the question. Miners will charge more fees to mine garbage, but who will compensate node runners to store said garbage? Why are they even forced to store that garbage on their nodes? That's my issue.

I don't want garbage on my node either, but I also want the network to run smoothly and avoid the UTXO bloat problem. That's why I was willing to consider their arguments and believe they were made in good faith. Maybe they are just looking at this from too narrow a "network health" perspective, or maybe they are in fact being influenced by miners or special interests; I'm not sure it matters. I am not down to open the floodgates to relaying infinite op_returns. The issues they are trying to solve are not urgent, and other solutions need to be considered. I really hope they shelve this change soon so we can start focusing on alternatives.

Your node is required to store some data to remain functional. People are abusing that to store data that is irrelevant to nodes.

It's like if someone pees in a public swimming pool, then someone else argues a pool is just a big urinal.

At least people are reviewing Core. Knots is one guy with controversial opinions pushing to the main branch on his own.

It's got a long-running track record with no problems, at least. That counts for something.

Bitcoin Core currently estimates feerates based on the time transactions of various feerates spend in the mempool before they appear in blocks. We ignore any transactions that we have not seen in the mempool.

We also have work in progress to estimate feerates based on the transactions in the mempool. https://github.com/bitcoin/bitcoin/issues/30392
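To make the current method concrete, here's a toy sketch in Python of the idea Murch describes: only transactions we actually saw enter our own mempool are counted, so a sparse mempool thins out the data set. The bucket bounds, the 80% success threshold, and every name in it are illustrative assumptions of mine, not Bitcoin Core's actual CBlockPolicyEstimator internals.

```python
# Toy block-based fee estimator: record how many blocks each *seen*
# transaction waited in our mempool before confirming, grouped into
# feerate buckets, then report the cheapest bucket that usually meets
# the confirmation target. Purely illustrative, not Core's algorithm.
from collections import defaultdict

BUCKETS = [1, 2, 5, 10, 20, 50, 100, 200]  # assumed sat/vB bucket floors

def bucket_of(feerate):
    """Highest bucket floor that the feerate reaches."""
    return max((b for b in BUCKETS if feerate >= b), default=BUCKETS[0])

class ToyEstimator:
    def __init__(self):
        self.pending = {}               # txid -> (bucket, height first seen)
        self.waits = defaultdict(list)  # bucket -> blocks waited to confirm

    def on_mempool_accept(self, txid, feerate, height):
        self.pending[txid] = (bucket_of(feerate), height)

    def on_block(self, height, confirmed_txids):
        for txid in confirmed_txids:
            if txid in self.pending:    # txs we never saw contribute nothing
                bucket, seen = self.pending.pop(txid)
                self.waits[bucket].append(height - seen)

    def estimate(self, target, threshold=0.8):
        """Cheapest bucket whose observed txs mostly confirmed in time."""
        for bucket in BUCKETS:
            w = self.waits.get(bucket, [])
            if w and sum(x <= target for x in w) / len(w) >= threshold:
                return bucket
        return None
```

The key line is the `if txid in self.pending` check: anything the node never relayed simply contributes no data point, which is how a heavily filtered mempool starves even the current block-based estimator.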

Alright, so if the node never sees some transactions, that still affects estimation even with the current method. That alleviates some of my concerns about the argument being completely fabricated.

I still think Mechanic has a lot of good points though. Having perfect local fee estimation doesn't seem that important when it only theoretically matters in some extreme edge cases like justice transactions. Is it really worth opening the floodgates to relaying infinite op_returns for slightly better fee estimation?

The main reason to do it is the same as in 2013: it's preferable for people to write into OP_RETURN outputs rather than into unspendable payment outputs. Dropping the limit makes OP_RETURN a reliable replacement for the latter, which is currently standard and essentially unpreventable.
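For anyone following along, a minimal sketch of the two encodings being compared. The opcode values are from the Bitcoin script spec; the rest is simplified for illustration. An OP_RETURN output is provably unspendable, so nodes can drop it from the UTXO set immediately, while data disguised as a fake P2PKH payment looks spendable and must be kept forever:

```python
# Rough illustration: OP_RETURN output vs. data stuffed into a fake
# payment output. Opcode constants are real; the helpers are my own
# simplified assumptions, not a library API.
OP_RETURN, OP_DUP, OP_HASH160 = 0x6a, 0x76, 0xa9
OP_EQUALVERIFY, OP_CHECKSIG = 0x88, 0xac

def op_return_script(data: bytes) -> bytes:
    assert len(data) <= 75               # single-byte push for simplicity
    return bytes([OP_RETURN, len(data)]) + data

def fake_p2pkh_script(data20: bytes) -> bytes:
    assert len(data20) == 20             # data disguised as a pubkey hash
    return (bytes([OP_DUP, OP_HASH160, 20]) + data20
            + bytes([OP_EQUALVERIFY, OP_CHECKSIG]))

print(op_return_script(b"hello").hex())       # prunable: starts with 0x6a
print(fake_p2pkh_script(b"\x00" * 20).hex())  # sits in the UTXO set forever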

Data in OP_RETURN is significantly more expensive than data in inscriptions, so I have a hard time understanding the concern that the overall block space occupied by data transactions would increase.

Whether more should have been done about inscriptions seems like a separate debate that mostly muddies the water here.

Also: more data in OP_RETURN drives out data in the witness part by 3:1, so blocks would tend to get smaller again, right?

Witness data is discounted by a factor of four, not three, but yeah, OP_RETURN data takes more blockspace and is more expensive to the same degree. Either way, the sender pays for whatever blockspace they win the bid on.

Why would they use op_return to store data then, if inscriptions are cheaper anyway? Out of the goodness of their hearts? Something needs to be done to incentivize them to make the switch, no?

Payloads over ~140 bytes are cheaper when encoded as witness data; smaller ones are cheaper as OP_RETURN.
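A back-of-envelope version of that crossover, in weight units. The 4-to-1 witness discount is consensus (BIP 141); the ~430 WU fixed overhead I'm assuming for the commit/reveal pattern (extra transaction, envelope, control block) is a rough ballpark of mine, not a measured constant:

```python
# Marginal payload cost in weight units (WU): non-witness bytes weigh
# 4 WU, witness bytes weigh 1 WU, but witness data pays a fixed
# commit/reveal overhead. The overhead figure is an assumption.
WU_PER_NONWITNESS_BYTE = 4   # OP_RETURN data lives in the non-witness part
WU_PER_WITNESS_BYTE = 1      # inscription data lives in the witness
REVEAL_OVERHEAD_WU = 430     # assumed: commit tx + envelope + control block

def op_return_cost(n):
    return n * WU_PER_NONWITNESS_BYTE

def inscription_cost(n):
    return n * WU_PER_WITNESS_BYTE + REVEAL_OVERHEAD_WU

# crossover: 4n = n + overhead  =>  n = overhead / 3
print(REVEAL_OVERHEAD_WU / 3)                        # ~143 bytes
print(op_return_cost(100), inscription_cost(100))    # 400 vs 530 WU
print(op_return_cost(1000), inscription_cost(1000))  # 4000 vs 1430 WU
```

Setting 4n equal to n plus the overhead gives n = overhead/3, which is where the ~140-byte figure comes from under these assumptions.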

Mechanic says that inscriptions are an unintended consequence, or a bug, of Taproot. Why are we capitulating to this bug instead of trying to fix it? Is Bitcoin permanently and fundamentally altered so that it's no longer a narrowly focused monetary network, and now it's also cloud storage for anyone willing to fork over the sats for blockspace? I get that it also enables a lot of functionality and novel scaling solutions, but it seems extremely risky to me. At least inscriptions are more onerous for the data hoarders to inscribe and reconstruct in order to access their data. If we let them put data on the blockchain in its raw form, that could invite even more problems than it solves.

Also, I understand they can do it anyway by going OOB directly to miners. That doesn't mean we should make it easier.

That's where we disagree. We should make the benign way of writing data easier so they stop using the malign one.

It's not a bug. Dropping the script size limit is an intentional design decision. As BIP 342 says:

"Why is a limit on script size no longer needed? Since there is no scriptCode directly included in the signature hash (only indirectly through a precomputable tapleaf hash), the CPU time spent on a signature check is no longer proportional to the size of the script being executed."

You could always put data in the blockchain by paying for the blockspace. The DoS protection was always the block size limit and the blockspace market: it's expensive, and people stop doing it whenever the hype has jumped the shark.

Could you elaborate on how "inscriptions are more onerous for the data hoarders to inscribe and reconstruct in order to access their data"? Serialization and deserialization of data is a standard exercise that gets designed once and then used via a library. Why would they care which function they call in a library?

Could you tell me more about the concern you have when you say "if we let them put data on the Blockchain in its raw form that could be inviting even more problems than it solves"?

First I want to say thank you for taking the time with me. I am very clearly beyond the limits of my knowledge here, so if what I'm saying doesn't make sense, that's probably why. I am basically a computer science dropout.

I don't think there's much point in me trying to answer those questions because I don't know what I'm talking about there. These little details don't really matter to me anymore anyway.

At this point I'm pretty much on board with letting the fee market decide the best use of limited blockspace. The monetary use is clearly the most valuable use by a long shot and will dominate, so let's just get on with it. Filters seem to be mostly performative nonsense if the node still accepts whatever blocks the miners produce. I can't imagine that's really what's been holding back a tide of shitcoins and jpegs that will destroy Bitcoin.

Thanks for taking the time Murch. I at least have no doubt about your good intentions.

You're welcome, and that's mostly what I think as well: (high-value) monetary transactions will have the highest bidding power in the long run, and that's what will curb the data transactions. Thanks for being open to hearing various sources.

Yes, that's correct. Small data payloads get slightly cheaper. It just seems that people are much more concerned about large amounts of data, and those are significantly more expensive with OP_RETURN. Either way, data occupies whatever blockspace the transaction bids the premium for.

Have inscriptions been a risk for 12 years? Why did they only materialize two years ago? Taproot? I'm just trying to understand the history here.

Theoretically you could have done the same reveal trick with P2SH, but the redeem script had to fit into a single 520-byte push. SegWit introduced the witness section and the witness discount, and Taproot removed the 520-byte limit for witness stack items. Altogether, you could then do big scripts for less in the witness.
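A rough sketch of that reveal trick's modern form. The opcodes are real; the framing is a simplification of mine, not ord's exact envelope format. The point is that each push is still capped at 520 bytes, but with Taproot's overall script size limit gone, the pushes can simply repeat until the data fits:

```python
# Data chunked into pushes inside an OP_FALSE OP_IF ... OP_ENDIF branch
# that never executes. Under P2SH the entire redeem script had to fit in
# one 520-byte push; under Taproot the whole script rides in the
# discounted witness with no overall size cap. Simplified illustration.
OP_FALSE, OP_IF, OP_ENDIF, OP_PUSHDATA2 = 0x00, 0x63, 0x68, 0x4d

MAX_PUSH = 520  # per-push limit that still applies to each chunk

def envelope(data: bytes) -> bytes:
    script = bytes([OP_FALSE, OP_IF])
    for i in range(0, len(data), MAX_PUSH):
        chunk = data[i:i + MAX_PUSH]
        # OP_PUSHDATA2 takes a 2-byte little-endian length prefix
        script += bytes([OP_PUSHDATA2]) + len(chunk).to_bytes(2, "little") + chunk
    return script + bytes([OP_ENDIF])

blob = b"\x42" * 5000      # far beyond the old 520-byte ceiling
print(len(envelope(blob))) # the whole script lands in the witness
```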