No one has been able to answer this

What happens if Knots becomes the new reference?

What's the worst case scenario?

Isn't it just Core with more options?

What's the concern here?

#asknostr

Discussion

Dude I’m so lost on this, too. From what I can tell even if you use Knots you still have to download everything anyway. You can’t filter out something you don’t know about.

It really comes down to what the miners are doing with the blocks, not what node runners are doing.

I don’t fuckin’ know lol

So from what I understand (very little), filters would prevent it propagating from your node. So if there's enough filtering, it won't reach miners (fast enough) to be put into a block. I think.

Also, not sure specifically if it's the case with Knots, but if you're filtering by size, I think you could filter before the whole data is downloaded. I believe that's how some email filters work.
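
Here's a rough sketch of that idea in Python. To be clear, this is not Knots' actual implementation (I haven't read it); the size limit and function names are invented purely to illustrate rejecting data early, before the full download completes.

```python
# Hypothetical sketch of size-based early rejection. Not Knots' real code;
# the limit and names are made up to illustrate the idea from the post above.

MAX_TX_BYTES = 100_000  # example policy limit, not an actual Knots default

def should_fetch(announced_size: int) -> bool:
    """Decide from the peer's size announcement alone, before downloading."""
    return announced_size <= MAX_TX_BYTES

def receive_stream(chunks, limit: int = MAX_TX_BYTES) -> bytes:
    """Abort mid-download the moment the running total exceeds the limit."""
    received = bytearray()
    for chunk in chunks:
        received.extend(chunk)
        if len(received) > limit:
            raise ValueError("data exceeds policy limit; dropping")
    return bytes(received)
```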

When filtering reaches sufficient density, spam cannot easily propagate through the mesh. Each node limits its edge connections, so once filters hit critical mass, the relay network becomes effectively unusable for spam. At that point, spammers must pay a premium to mining pools for inclusion. Mining pools that accept spam will see their reputation degrade, causing them to lose hashpower. This is market discipline in action. “No shoes, no shirt, no service” is not censorship; it is the minimal standard of decorum required to participate in the market.
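
To make the "critical mass" intuition concrete, here is a toy percolation simulation in Python. The topology, node count, and filtering fractions are all invented, and real Bitcoin network topology differs, so treat the numbers as illustrative only.

```python
import random

# Toy model: a spam transaction starts at a source node and is forwarded only
# by nodes that do NOT filter. How often does it reach a designated "miner"?

def spam_reaches_miner(n=2000, degree=8, filter_frac=0.9, trials=200):
    hits = 0
    for _ in range(trials):
        relays = [random.random() > filter_frac for _ in range(n)]
        peers = [random.sample(range(n), degree) for _ in range(n)]
        miner, source = 0, n - 1
        seen, frontier = {source}, [source]
        while frontier:
            node = frontier.pop()
            if node == miner:
                hits += 1
                break
            if node != source and not relays[node]:
                continue  # filtering node: accepts the tx, never forwards it
            for p in peers[node]:
                if p not in seen:
                    seen.add(p)
                    frontier.append(p)
    return hits / trials

for f in (0.5, 0.8, 0.9, 0.95):
    print(f"filtering fraction {f:.2f}: spam reaches the miner "
          f"in ~{spam_reaches_miner(filter_frac=f):.0%} of trials")
```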

Additionally, if miners include this trash in the blockchain despite the network not relaying it, they risk having their blocks orphaned: those blocks propagate much more slowly through the peer-to-peer mesh because the transactions are not known to all the nodes and have to be requested, which slows the blocks down. More honest miners will win.
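
The orphan-risk part of this argument is easy to put numbers on. Blocks arrive roughly every 600 seconds, so an extra propagation delay of d seconds adds roughly 1 - exp(-d/600) to the chance that a competing block appears in that window. The delays below are invented for illustration:

```python
import math

# Extra orphan risk from slower block propagation: with blocks arriving as a
# Poisson process (~600 s mean), a block delayed by `d` extra seconds risks a
# competing block appearing in that window with probability ~ 1 - exp(-d/600).

def extra_orphan_risk(extra_delay_s: float, block_interval_s: float = 600.0) -> float:
    return 1.0 - math.exp(-extra_delay_s / block_interval_s)

for d in (2, 5, 10, 30):
    print(f"+{d:>2} s propagation delay -> ~{extra_orphan_risk(d):.2%} extra orphan risk")
```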

Ok so when I hear the Core people say that the filter doesn't work, I think their steelman argument is that, because the filters can't provide 100% filtering, miners will eventually mine some of the spam into a block, which will have to be downloaded... Is that right?

That is correct, but that doesn't mean filters don't work. So far, 95% or more of transactions adhere to the filters imposed by the peer-to-peer Bitcoin network, which provably shows that the filters work. Their goal is not to fully censor transactions; their goal is to rate limit transactions so as not to fuck up the experience for everybody else. It's really not hard to understand. Core's arguments are just dishonest and can be falsified by empirical data from everybody's Bitcoin blockchain.

Just curious, @germanhodl are you a bitcoin dev or knots dev?

No, I'm just a random pleb. But I work in software development, and developers in general are complete autists with tunnel vision. You should not listen to them; just tell them what to do 😉

This is the problem, I think, yeah.

Like I said, I'm not smart enough to get all the technical details, but I'm trying to get their steelman case.

Can you explain how they think it helps with miner centralization?

Or what the guy who made damus is thinking when he says it's removing an unneeded software component?

They think that by removing the filters and relaying all the trash around, smaller miners will have a better chance competing with big miners, who offer out-of-band services like slipstream or decide, against network policy, to mine trash relayed to them regardless. They hope that by doing this, small miners will have access to the same transactions and won't be disadvantaged by blocks relayed to them where they need to fetch the transactions because they filtered those transactions out in the first place (spam).

Unfortunately, this is a problem created by miner centralization in the first place and will only deepen miner centralization. A healthy filtering peer-to-peer Bitcoin network will punish those big miners by slowing their trashy blocks' propagation through the network and rewarding honest miners who follow the same mempool policy as most of the nodes.

How dare you ask this question. We need to "listen to the professional software developers." Oh but wait, what happens when they disagree? Oh, well then we should listen to the "majority of the main group" because that always works out.

The best case scenario is that the notion of having “one codebase” is cured by a new paradigm in source control management using Nostr. Not just a different backend for Git, but an entirely new way of managing pull requests and the review process. This paradigm uses `npub`-based web-of-trust and PRs that allow merges with arbitrary forks. Rather than trusting a single person with “the keys” to the repo, trust becomes divergent - as should the codebase itself - while harmony is achieved through automated functional testing against specifications.

This way, the end user selects which features are included, and the forking, building, and validation process becomes trivial to implement through scripting. It achieves selectivity similar to `kconfig` and `kmod`, where every kernel can differ according to its purpose, but applied to software in general.
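
As a sketch of what that scripting could look like (all module names, blobs, and the in-memory store below are invented; a real system would fetch from IPFS or torrents):

```python
import hashlib

CAS = {}  # toy content-addressed store: sha256 hex digest -> bytes

def put(blob: bytes) -> str:
    """Store a blob and return its content address."""
    addr = hashlib.sha256(blob).hexdigest()
    CAS[addr] = blob
    return addr

def get(addr: str) -> bytes:
    """Fetch a blob and verify that it actually matches its address."""
    blob = CAS[addr]
    assert hashlib.sha256(blob).hexdigest() == addr, "corrupted blob"
    return blob

# Publisher side: modules pinned by content hash, never by branch name.
profile = {
    "consensus": put(b"...consensus sources..."),
    "mempool":   put(b"...mempool sources..."),
    "gui":       None,  # feature deselected by the end user, like CONFIG_FOO=n
}

# End-user side: fetch, verify, and build only the selected modules.
for module, addr in profile.items():
    if addr is None:
        continue
    src = get(addr)
    print(f"building {module} from verified blob ({len(src)} bytes)")
```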

## Core principles

1. **Divergent trust, convergent behavior.** No canonical repo. Interop comes from conformance to specs and test suites, not from custodial maintainer keys.

2. **Identity = keys (npub).** Web-of-Trust (WoT) computes reputation locally; every attestation is signed.

3. **Artifacts are content-addressed.** Code, specs, tests, build recipes, and results are blobs addressed by hash; Nostr carries signed metadata and references.

4. **Policy-driven selection.** Each user/organization runs a policy that selects which forks, features, and patches compose their build.

5. **Everything is attestable.** Reviews, CI results, SBOMs, and releases are signed events; no “state” exists off-ledger. (A sketch of one such attestation follows this list.)
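
To make principle 5 concrete, here is what one signed review attestation might look like as a Nostr-style event. The `kind` number is invented for illustration, and the BIP-340 Schnorr signature over secp256k1 is left as a placeholder; the event id follows NIP-01's serialization.

```python
import hashlib, json, time

# Sketch of a signed review attestation as a Nostr-style event. Kind 39701 is
# a made-up number; real events are signed with BIP-340 Schnorr, elided here.

def review_attestation(pubkey: str, artifact_hash: str, verdict: str) -> dict:
    event = {
        "pubkey": pubkey,                      # reviewer's key (hex npub form)
        "created_at": int(time.time()),
        "kind": 39701,                         # hypothetical "code review" kind
        "tags": [["artifact", artifact_hash],  # the content-addressed target
                 ["verdict", verdict]],        # e.g. "pass" or "fail"
        "content": "reviewed against the module's spec and test suite",
    }
    # NIP-01 style id: sha256 of the canonical [0, pubkey, ...] serialization
    payload = json.dumps([0, event["pubkey"], event["created_at"],
                          event["kind"], event["tags"], event["content"]],
                         separators=(",", ":"))
    event["id"] = hashlib.sha256(payload.encode()).hexdigest()
    event["sig"] = "<schnorr signature over id goes here>"
    return event

print(json.dumps(review_attestation("ab" * 32, "sha256:deadbeef", "pass"),
                 indent=2))
```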

## High-level architecture

- **Storage layer:** content-addressable (CAS over IPFS/torrent/object store). Nostr events store hashes/URIs; data stores serve blobs.

- **Nostr event schema:** new event kinds to model repos, modules, specs, PRs, reviews, CI attestations, releases, and WoT edges.

- **Module graph:** repo ⇒ modules ⇒ units (packages/libraries/plugins). Directed acyclic graph keyed by `(module_id, version_hash)`.

- **Profiles:** kconfig-like “feature profiles” that select modules/variants and set version constraints.

- **Policy engine:** given a profile, WoT weights, and attestations, resolve a build plan deterministically (see the sketch after this list).

- **CI/Validation network:** untrusted builders publish *signed* result attestations (pass/fail, logs, SBOM, reproducible build proofs).

- **Relay/rate limiting:** relays enforce anti-spam using blind-signed tokens and per-npub quotas; attestation relays may be specialized.
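
To illustrate the policy engine's job, here is a minimal deterministic resolver for a single module. The trust threshold, field names, and WoT weights are all invented; a real engine would also handle version constraints and the full module DAG.

```python
# Minimal sketch of deterministic, WoT-weighted variant selection for one
# module. Thresholds, weights, and field names are invented for illustration.

def resolve(module: str, variants: list, wot: dict, min_trust: float = 2.0) -> dict:
    def trust(variant: dict) -> float:
        # sum the local WoT weight of every key that attested "pass"
        return sum(wot.get(att["pubkey"], 0.0)
                   for att in variant["attestations"]
                   if att["verdict"] == "pass")
    eligible = [v for v in variants if trust(v) >= min_trust]
    if not eligible:
        raise LookupError(f"no variant of {module} meets the trust policy")
    # deterministic: highest trust wins; version hash breaks exact ties
    return max(eligible, key=lambda v: (trust(v), v["version_hash"]))

wot = {"alice": 1.5, "bob": 1.0, "carol": 0.2}  # locally computed WoT weights
variants = [
    {"version_hash": "sha256:1111", "attestations": [
        {"pubkey": "alice", "verdict": "pass"},
        {"pubkey": "bob",   "verdict": "pass"}]},
    {"version_hash": "sha256:2222", "attestations": [
        {"pubkey": "carol", "verdict": "pass"}]},
]
print(resolve("mempool", variants, wot)["version_hash"])  # -> sha256:1111
```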

## What this enables

- End-user chooses features; builds are trivially scriptable and hermetic.

- PRs become portable, testable *offers*—not pleas to gatekeepers.

- Fork explosion becomes an asset; convergence emerges from tests + policy.

Not sure what you mean by reference? Majority client used by node runners? If even one miner uses v30 and adds excessive data in the OP_RETURN on a won block, that lives forever on everyone’s nodes (which I think can already happen with slipstream), unless there’s a hard fork initiated retroactively, I think.

Knots is maintained and gatekept by one person. Not that core is much better (five maintainers) but still better than one.

There are only currently five people who can actually push code in the Core GitHub repository, but there are many more who review and contribute. Portraying Core as just five people is inaccurate.

I’m getting the feeling the concern might be the devs’ funding: sponsorships and grants.