Two key assumptions behind your comfort level don’t align with Core’s behavior.
First, “nodes can just throttle or drop big transactions.” The per-transaction trickle code was removed years ago because it broke compact-block relay: a node that withholds a large transaction doesn’t save bandwidth, it just forces its peers into the slower fallback download path, which costs more bandwidth, not less. The only bandwidth cap left, -maxuploadtarget, is off by default, so nearly every Core node forwards any standard transaction as soon as it passes policy checks. In other words, raising the size limit means most nodes will move those bigger payloads for free.
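For concreteness, here is that one remaining cap as a bitcoin.conf sketch; the 5000 MiB figure is an arbitrary illustration, not a recommendation:

```
# bitcoin.conf: the only remaining bandwidth cap, off by default.
# Target is MiB of upload per 24h window; 5000 is an arbitrary example.
# Note what it actually limits: once the target is near, the node stops
# serving week-old blocks to most peers; transaction relay keeps going.
maxuploadtarget=5000
```

Even the operators who do set this are still relaying every standard transaction they see.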
Second, “raising the cap doesn’t matter because storage is cheap and pruning exists.” Pruning trims disk usage after the fact but does nothing for the live relay cost or the RAM needed to keep the UTXO set hot. That set is already too large for an entry-level machine’s memory; every extra gigabyte means more cache misses and more disk reads, even on SSDs. Cheap terabytes don’t fix cache misses.
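A back-of-envelope sketch of the gap; the coin count and per-coin memory overhead are rough assumptions on my part, while 450 MiB is Core’s actual default for -dbcache:

```python
# Rough estimate: can the UTXO set stay resident in RAM on a small box?
UTXO_COUNT = 180_000_000        # assumed: ~1.8e8 unspent coins
MEM_PER_COIN = 100              # assumed: bytes per coin in the cache

need_gib = UTXO_COUNT * MEM_PER_COIN / 2**30
default_cache_gib = 450 / 1024  # Core's default -dbcache is 450 MiB

print(f"RAM to hold the full set: ~{need_gib:.1f} GiB")
print(f"Default cache: {default_cache_gib:.2f} GiB, "
      f"about {need_gib / default_cache_gib:.0f}x too small")
```

Under those assumptions the full set needs roughly forty times the default cache, and everything above the cache spills to on-disk LevelDB reads.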
Legal risk isn’t theoretical either: illicit images and links are already embedded in the chain. An unlimited OP_RETURN lets an entire file ride in one clean, trivially extractable chunk; a small cap forces it into thousands of scattered shards that need custom tooling to reassemble (rough numbers below). That difference matters to hobby operators who can’t lawyer up or geofence their nodes.
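To put numbers on those shards, a tiny sketch; the file size and the larger caps are illustrative, and 80 bytes is the long-standing default data limit:

```python
import math

# Outputs needed to embed one file at different OP_RETURN data caps.
FILE_BYTES = 4_000_000  # assumed: a 4 MB image

for cap in (80, 10_000, FILE_BYTES):
    shards = math.ceil(FILE_BYTES / cap)
    print(f"data cap {cap:>9,} B -> {shards:>6,} outputs")
```

At the default cap that hypothetical file is 50,000 separate outputs; uncapped, it is one.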
A modest default cap with the config knob intact doesn’t censor anyone. It simply makes large, non-monetary payloads pay their real network cost and leaves each node free to tighten or loosen policy without patching code.
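For reference, the knob itself as a bitcoin.conf sketch; 83 bytes has long been Core’s default, i.e. 80 bytes of data plus the OP_RETURN framing:

```
# bitcoin.conf: per-node OP_RETURN relay policy.
datacarrier=1        # relay transactions carrying OP_RETURN outputs
datacarriersize=83   # max serialized output size: 80 data bytes + framing
```

A node that wants stricter policy sets datacarrier=0; one that wants looser policy raises datacarriersize. Either way, no recompile.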