Replying to GLACA

I’m asking this sincerely - not to argue, but because I genuinely want to learn.

I run a Bitcoin Core node. I mine solo with Bitaxe. I watch the mempool daily. I’m here because I care about this protocol and want to understand it better from all sides - including yours. But some of the current narratives around UTXO bloat and filters feel a bit disconnected from what I see.

Block size hasn’t changed - it’s still capped at 4 MB (strictly, 4 million weight units) - and no one on the other side of this debate is pushing to increase it. In fact, despite inscriptions and ordinals, we often see periods where the mempool is quiet and blocks aren’t even full. I’ve watched it myself. This tells me the protocol is far from being overwhelmed.

The UTXO set? Yes, it’s growing. I’ve read it’s around 12 GB now. That’s notable - but not alarming. Most of that growth came during the BRC-20 inscription spike. Since then, it’s slowed. My node runs fine on a 1 TB SSD, and if I need to upgrade to 2 TB, that’s already affordable - and will only get cheaper. At current growth, that upgrade would buy me years.

So my question is - why are we treating this like an emergency?

If the blocks are capped at 4MB, and fees exist to regulate access, and pruning is available for those who can’t store everything… isn’t this system already self-regulating?
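For reference, the pruning mentioned above is a one-line setting in bitcoin.conf. The option name is Core’s; the value shown is just an example (550 is the minimum Core accepts):

```ini
# bitcoin.conf - discard old block files after validation.
# 550 is the smallest value Core allows (~550 MB of recent blocks kept);
# any larger value works too. The node still fully validates everything,
# it just doesn't keep the whole historical chain on disk.
prune=550
```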

I know you care deeply about Bitcoin’s longevity and decentralization. So do I. But I also believe the idea that “whoever pays the fee gets into the chain” has always been one of Bitcoin’s strongest promises. If a group decides certain transactions aren’t worthy - even if they pay their way - doesn’t that edge toward selective permissioning?

That’s the heart of my concern. Where’s the line?

Who decides what’s “good” or “bad” usage?

When I ask these questions, I’m not trying to be difficult. I’m trying to understand whether filters are a tool - or a step toward censorship. Because I believe that Bitcoin will outlast every storage system, every government archive, and every digital library. And because of that, I see huge potential in its permanence - not just for finance, but for truth, history, and even art. Bitcoin as timechain. Bitcoin as the modern Library of Alexandria.

I know that might sound idealistic, but I say it as someone who’s using Bitcoin for those very things - and paying full fees to do so. That’s why the blanket dismissal of inscriptions as “trash” feels like it misses the nuance. Not all inscriptions are noise. Not all projects are spam. Some are attempts to preserve meaning on the most resilient ledger humanity has ever built.

If that’s not something Bitcoin can be used for, then what’s the alternative?

Censorship-prone servers? Social networks that disappear our work overnight?

Everything else is corruptible. Only Bitcoin has the permanence.

And I get it - Bitcoin wasn’t made for art. But it also wasn’t made for multi-sig or Lightning or Taproot. Use cases emerge. The best ones stick because consensus allows them to.

Which brings me to a final point: filters don’t change consensus rules - but they do shape who gets seen, what gets relayed, what feels welcome. So are we sure we’re not just replacing openness with a preference?

Because if we go down that path - of choosing what Bitcoin “should be” - we risk forgetting what made it special in the first place.

I want to get this right. That’s why I’m asking. And I’m asking as someone who wants to learn - not to win.

Blocks stay at 4 MB, but during quiet fee windows, big transactions still blast across the network, and every node must download and validate those bytes before they hit a block. The UTXO set is already 12 GB—enough to push a Pi onto slow disk seeks—so a steady drip of bulk data keeps nudging the hardware bar up. A small OP_RETURN cap is just a speed bump: it makes anyone storing non-monetary data split it into chunks or pay more, while each node can relay or ignore it. The current Core PR worries people because it raises the default cap a hundred-fold and removes the setting that lets operators keep a lower limit. I’d rather keep a modest ceiling and the knob; that lets art still pay its way, shields hobby nodes from legal risk (see Matzutt 2018 on CSA data in Bitcoin), and leaves Bitcoin’s main job—sound money—uncluttered.
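For context, the knob in question is an ordinary bitcoin.conf policy setting. These option names exist in current Core releases; the values shown are the long-standing defaults, not a recommendation:

```ini
# bitcoin.conf - the OP_RETURN relay-policy knob discussed above.
# datacarrier toggles relay of OP_RETURN outputs entirely;
# datacarriersize caps the payload this node will relay and mine
# (83 bytes has been the long-standing default: 80 data bytes plus
# the script overhead). Neither option is a consensus rule.
datacarrier=1
datacarriersize=83
```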


Discussion

I get the concern - but the framing misses what really matters. The block size stays fixed - so storage growth is predictable. Relay bandwidth and mempool churn are transient - nodes can throttle, prune, and drop transactions as needed. The UTXO set, yes, it’s 12 GB - but it’s been stable since the inscription boom cooled off. Even so, serious node runners already spec for SSDs - slow disks were phased out by cost and necessity years ago.

As for OP_RETURN - raising the default cap doesn’t force nodes to relay or index anything. It just removes a soft bottleneck that hasn’t meaningfully filtered “spam” in years. If the data pays fees and doesn’t violate consensus, then it’s Bitcoin-native - ugly or not.

The “legal risk” argument leans speculative - nodes aren’t obligated to archive OP_RETURN data, and the Matzutt paper points to edge cases that haven’t borne out under real-world pressure. Let’s not set relay policy based on theoretical terrors.

If Bitcoin is truly neutral, let the market express value - whether in art, text, hashes, or coin. Censorship via knob may feel clean - but it’s just control with a prettier name.

Knots-style nodes may give some the illusion of filtering, but they don’t prevent inclusion - they simply delay it. The data still hits mempools, still gets mined, and still lives on-chain.

And even if some aren’t fans of art on Bitcoin - or of preserving fragments of human culture through permanent inscription - I’d argue we at least look at what projects like Bitmap represent. Because when we fix money, we’ll need territory - cyber territory - and it won’t be built on Solana or Arweave. It can only anchor on Bitcoin. No other foundation is equally censorship resistant, decentralized, or economically sound.

In my view, Bitmap is the most compelling project ever built using satoshis themselves. Without digital land, Bitcoin’s vision of self-sovereignty is incomplete. That’s not hype. That’s a long view of where all this is going.

All in my opinion - as someone who believes Bitcoin is more than just sound money. It’s the base layer of our future reality.

Two key assumptions behind your comfort level don’t align with how Core actually behaves.

First, “nodes can just throttle or drop big transactions.” The per-transaction trickle code was ripped out years ago because it broke compact-block sync; when a node tries to withhold a large tx, it simply forces a slower fallback download, using more bandwidth, not less. The only bandwidth cap left (-maxuploadtarget) is off by default, so almost every Core node forwards any standard TX immediately. In other words, raising the size limit means most nodes will move those bigger payloads for free.
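The one remaining cap is likewise just a config option, and it’s worth noting what it does and doesn’t throttle. The option name is Core’s; the value here is illustrative:

```ini
# bitcoin.conf - outbound traffic target, off (0) by default.
# Value is an approximate daily upload budget in MiB. When the target
# is hit, Core stops serving historical blocks to most peers -
# ordinary relay of standard transactions continues regardless,
# which is exactly why it doesn't help with large-payload relay.
maxuploadtarget=5000
```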

Second, “raising the cap doesn’t matter because storage is cheap and pruning exists.” Pruning helps the disk after the fact but does nothing for the live relay hit or the RAM needed to hold the UTXO set. That set is already too large to fit in entry-level memory; every extra gigabyte forces more disk seeks even on SSDs. Cheap terabytes don’t fix cache misses.
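A quick back-of-envelope makes the cache-miss point concrete. This sketch assumes the ~12 GB UTXO figure cited above and Core’s long-standing default in-memory cache (-dbcache) of 450 MiB; both numbers are illustrative, not measurements from any particular node:

```python
# How much of a ~12 GiB UTXO set fits in Core's default -dbcache?
# Every lookup that misses this cache becomes a disk seek.
utxo_set_bytes = 12 * 1024**3    # ~12 GiB UTXO set (figure cited above)
dbcache_bytes = 450 * 1024**2    # Core's default -dbcache of 450 MiB

cached_fraction = dbcache_bytes / utxo_set_bytes
print(f"cached: {cached_fraction:.1%}")   # prints "cached: 3.7%"
```

At default settings, well over 95% of the set lives on disk, which is why growth in the UTXO set hurts even nodes with cheap terabytes of SSD.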

Legal risk isn’t theoretical either—illicit images and links are already on the chain. An unlimited OP_RETURN lets the entire file ride in one clean chunk; a small cap forces it into thousands of random shards. That difference matters to hobby operators who can’t lawyer up or geofence nodes.
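The sharding difference is easy to quantify. This sketch uses the classic 80-byte standard-relay payload limit; the 1 MB file size is an arbitrary example:

```python
# How many separate OP_RETURN outputs does a file need under a small
# payload cap, versus one output under an unlimited cap?
OP_RETURN_CAP = 80       # bytes of payload per output (the old default)
file_size = 1_000_000    # a 1 MB file, purely for illustration

chunks = -(-file_size // OP_RETURN_CAP)   # ceiling division
print(chunks)                             # prints 12500
```

Under the small cap the file becomes 12,500 scattered fragments, each paying its own transaction overhead; with no cap it rides in one clean, trivially extractable chunk.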

A modest default cap with the config knob intact doesn’t censor anyone. It simply makes large, non-monetary payloads pay their real network cost and leaves each node free to tighten or loosen policy without patching code.