Replying to Tavon

Hey nostr:nprofile1qqs9pk20ctv9srrg9vr354p03v0rrgsqkpggh2u45va77zz4mu5p6ccpzemhxue69uhks6tnwshxummnw3ezumrpdejz7qgwwaehxw309ahx7uewd3hkctc0c969u, I appreciate the work you do developing on top of bitcoin, but I'm confused by your explanation. Can you expand on these points you made:

1. How is op_return filtering out legitimate content? What is legitimate content in your opinion?

2. Why is helping my node predict what's in the next block important? Is it more important than keeping images and video from bloating the chain? Doesn't keeping the barrier to entry for running a node low keep bitcoin decentralized?

I'll try my best.

1. op_return was invented as a way to prevent even more destructive ways of storing data. if you dig through bitcoin's history, you'll notice that in the colored coins / rare pepe era, people started putting data into multisig outputs, which created unspendable UTXOs. that's just background. today, we're building L2s to make Bitcoin more useful as money. these need to anchor data on the blockchain, and op_return is *one* way of doing that, and the best way in terms of minimizing damage. these L2s store legitimate data (they don't store data just for the lulz or to spam the network); they do it to improve bitcoin's scalability. NOTE: this discussion is widely blown out of proportion because OP_RETURN is only economical up to a pretty small payload size, but you probably know this already. nobody in their right mind would store 1 MB in an op_return if they can put it in the witness, certainly not a spammer.
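to make the "minimizing damage" point concrete, here's a toy sketch (my own illustration, not production code; the opcode value is from the Bitcoin script spec) of what an op_return output actually is:

```python
# toy sketch of an OP_RETURN output script: a provably unspendable output
# that carries a small payload.
OP_RETURN = 0x6a  # the OP_RETURN opcode

def op_return_script(payload: bytes) -> bytes:
    """Build the scriptPubKey `OP_RETURN <payload>`."""
    if len(payload) > 75:
        raise ValueError("keep anchors small; big payloads belong elsewhere")
    # for payloads up to 75 bytes, a single length byte is a direct data push
    return bytes([OP_RETURN, len(payload)]) + payload

# an L2 anchoring a 32-byte commitment (say, a merkle root):
print(op_return_script(bytes.fromhex("ab" * 32)).hex())
# -> 6a20abab... since the output can never be spent, nodes can drop it
#    from the UTXO set entirely -- unlike the old multisig-stuffing trick.
```

that last comment is the whole argument: the data rides along in a prunable output instead of sitting in the UTXO set forever.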

2. there are two important points that answer this question: fee estimation and compact block relay. fee estimation: if you don't know what the next block will look like (because you filter transactions out of your mempool), your fee estimates will be off. if you're using an L2 that depends on good fee estimation, like Lightning, this even increases your risk of losing money. second, compact block relay: it minimizes network data when a block is found. instead of downloading each new block in full from their peers, nodes try to get a summary of the block and fill in the blanks using their mempool. in the best case, they already know all the transactions and only need to download the block header and the summary. this reduces p2p traffic and increases block propagation speed, which in turn makes mining more competitive, and therefore more decentralized.
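here's a toy model of that "fill in the blanks" step (very simplified and my own illustration: real compact block relay per BIP 152 uses SipHash-based short ids; a truncated hash is just a stand-in here):

```python
# toy compact-block reconstruction: a peer announces short ids for the new
# block, and we match them against our own mempool. anything we filtered
# out of the mempool costs an extra round-trip to fetch -- and it's also
# exactly the data our fee estimator never saw competing for block space.
import hashlib

def short_id(txid: str) -> str:
    # stand-in for BIP 152's SipHash-based short ids
    return hashlib.sha256(txid.encode()).hexdigest()[:12]

def reconstruct(announced, mempool):
    by_short = {short_id(txid): txid for txid in mempool}
    known = [by_short[s] for s in announced if s in by_short]
    missing = [s for s in announced if s not in by_short]
    return known, missing

mempool = {"tx_plain_1", "tx_plain_2"}                  # a filtering node's view
block = ["tx_plain_1", "tx_plain_2", "tx_op_return_1"]  # what actually got mined
known, missing = reconstruct([short_id(t) for t in block], mempool)
print(f"{len(known)} from mempool, {len(missing)} to fetch from peers")
# -> 2 from mempool, 1 to fetch: the filtering node reconstructs and
#    relays the block slower than a node with an unfiltered mempool.
```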

hope that addresses some of your questions.

Discussion

Thanks for being honest and coming clean that you're blowing open op_return for the L2s you're building.

It's a pleasure. I'm not building them btw. I just follow the discussion closely.

Cashu isn't an L2; it's a Chaumian ecash system with centralized mints. That's fine if you trust the mint, but calling everything else "centralization" while defending Cashu is a bit rich.

My master, the bald degen weirdo, wants me to come clean on his behalf.

Thank you for responding, nostr:npub12rv5lskctqxxs2c8rf2zlzc7xx3qpvzs3w4etgemauy9thegr43sf485vg. It did help me understand your perspective better, but it also raised more questions:

1. So I appreciate your coverage of L2s. I do want L2s, bitcoin's global scalability, and anonymity to succeed, and I do want bitcoin to be used as a medium of exchange at some point. But is scalability currently an issue? How much data does an L2 currently need to store? Are L2s being hindered by the op_return default cap right now? Do you think increasing the op_return cap slowly, instead of removing it altogether, would be the more responsible thing to do?

I might be totally wrong here, but my understanding is that the op_return limit was working as a spam filter (there was no cat-and-mouse game) and then taproot unintentionally gave spammers an alternative route. Shouldn't that be addressed by devs instead? Doesn't this also show us that making changes to the protocol can introduce unforeseen consequences? So again, doesn't going slowly seem more logical?

2. This is starting to get past my current knowledge of bitcoin's mechanisms, so I appreciate you trying to break it down for me. You said that when L2 fee estimations are off there is a risk of losing money; does this mean fees could be a few sats more, or could a whole transaction get voided? Neither is acceptable, but I'm trying to understand the severity.

I guess I don't want mining centralization or node software development centralization. Are you saying that when a large mining pool mines a block, they get a head start on the next block because other nodes have to wait for that block to get relayed? And that relay is slowed down when there isn't uniformity in node software/rules?

Anyway this is super interesting and I appreciate your insights. I do want to get things right as a node runner.