Nuh
930ccef12372dd2f16057cfc54f0dbd94335d8b51b4e2737236b00cab718fcd9
Working on https://mlkut.org, designer and maintainer of https://pkarr.org. https://nuh.dev
Replying to namosca

You might be interested in the hash function BLAKE3, which enables parallel processing, native torrenting, and realtime streaming:

https://en.m.wikipedia.org/wiki/BLAKE_(hash_function)#BLAKE3

"In addition to providing parallelism, the Merkle tree format also allows for verified streaming (on-the-fly verifying) and incremental updates.[27]"

"BLAKE3 is a single algorithm with many desirable features (parallelism, XOF, KDF, PRF and MAC), in contrast to BLAKE and BLAKE2, which are algorithm families with multiple variants. BLAKE3 has a binary tree structure, so it supports a practically unlimited degree of parallelism (both SIMD and multithreading) given long enough input. The official Rust and C implementations[27] are dual-licensed as public domain (CC0) and the Apache License.[28]"
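To make the quoted tree structure concrete, here is a toy tree hash. This is not BLAKE3 (it stands SHA-256 in for the real compression function and ignores BLAKE3's chunk counters and flags), but it shows why a Merkle layout lets every chunk be hashed independently:

```python
import hashlib

CHUNK = 1024  # BLAKE3 also splits input into 1 KiB chunks

def leaf(chunk: bytes) -> bytes:
    # domain-separated leaf hash (toy stand-in for BLAKE3's chunk state)
    return hashlib.sha256(b"leaf:" + chunk).digest()

def parent(left: bytes, right: bytes) -> bytes:
    # domain-separated parent node hash
    return hashlib.sha256(b"node:" + left + right).digest()

def tree_hash(data: bytes) -> bytes:
    # every chunk is hashed independently, so this level parallelizes
    # across SIMD lanes or threads in a real implementation
    nodes = [leaf(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)] or [leaf(b"")]
    # combine pairwise up to a single root, carrying odd nodes up as-is
    while len(nodes) > 1:
        nodes = [parent(nodes[i], nodes[i + 1]) if i + 1 < len(nodes) else nodes[i]
                 for i in range(0, len(nodes), 2)]
    return nodes[0]
```

A receiver that knows the root can check each chunk as it arrives, given the sibling hashes on its path, which is the "verified streaming" the quote mentions; changing one chunk only re-hashes one leaf-to-root path.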

We use BLAKE3 in Pubky, but it has little to do with the topic at hand: DHTs.

Don't you need multiple service providers to hold your keys and sign on your behalf? Basically multiple signers, and these are services.

I can't imagine the UX of asking normal people like myself to sign up to multiple providers and judge who deserves my trust and who doesn't.

πŸ˜… I am not concerned with collusion because I don't even think I can get a single human being to signup yo multiple services to do just one thing, I can't even convince myself to bother.

Replying to Constant

The process on https://join.the-nostr.org/ (just a demo, do not use it) is rather smooth, only that it does not explicitly give you the underlying private key, but it is actually stored locally, so that is trivial to add.

You 'need' an active signer only if you want to be part of the multisig. Not sure how useful that actually is; you only reduce the trust a tiny bit. It's still a threshold signature, so collusion is still possible, so at the end of the day you still trust those that you made part of the FROST.

So no, you don't need to be an active signer, you just get a bunker link and you are off. If the link gets compromised you can ask the signers to stop signing and create a new FROST based on the same key with the same resulting npub.
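For what it's worth, the collusion point is inherent to any threshold scheme. The sketch below is not FROST (real FROST does distributed Schnorr signing over secp256k1); it is plain Shamir secret sharing over a toy prime field, which shows the trust model: any t shareholders together recover the secret, full stop.

```python
import random

P = 2**61 - 1  # toy prime field; real FROST operates in the secp256k1 group

def split(secret: int, t: int, n: int):
    # random polynomial of degree t - 1 with f(0) = secret
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    # each of the n parties gets one evaluation point as their share
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers f(0), i.e. the secret
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret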

So let me get this straight: you hire multiple nsecbunkers instead of just one? I am not going to pass judgment on that, I promise; I just want to know what it is, and I will keep my opinions to myself this time.

Of course, relying on DNS is a perfect way to scale... the question is: do you want to use ICANN as the root server, or millions of nodes in an overlay network that has been actually winning censorship fights for more than a decade?

Fiatjaf keeps insisting that DHTs don't work regardless of how many proofs I show him. He also would rather make up a bespoke tiny DHT than use Mainline because he thinks breaking compat with secp would kill Nostr.

I can't say it won't.

But if you can support NIP-05... you can support Pkarr. Or don't; I am in this for the system design conversation more than any desire to change how any existing systems or products work.

Here is the Kademlia algorithm:

1. you ask the closest nodes you know about, about a key.

2. some will answer with the data, others will respond with the even closer nodes they know about.

3. you repeat step 1 for the nodes you were told about from step 2 until you run out of nodes you haven't visited already.
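The three steps above can be sketched as an iterative lookup. Everything here is a stand-in (a dict instead of real RPCs, small integer node IDs), but the control flow is the whole algorithm:

```python
# toy overlay: node_id -> (local key/value store, peer ids it knows about)
NETWORK = {
    1: ({}, [2, 3]),
    2: ({}, [4]),
    3: ({}, [4, 5]),
    4: ({9: "hello"}, [5]),
    5: ({}, [1]),
}

def distance(a, b):
    return a ^ b  # Kademlia's XOR metric

def lookup(key, bootstrap, alpha=3):
    to_visit = sorted(bootstrap, key=lambda n: distance(n, key))
    visited = set()
    while to_visit:
        # step 1: ask the alpha closest nodes we haven't visited yet
        batch, to_visit = to_visit[:alpha], to_visit[alpha:]
        for node in batch:
            visited.add(node)
            store, closer = NETWORK[node]
            if key in store:
                return store[key]  # step 2: some answer with the data
            # step 2: others answer with even closer nodes they know about
            to_visit.extend(n for n in closer if n not in visited)
        # step 3: repeat with the new candidates until none are left
        to_visit = sorted(set(to_visit) - visited, key=lambda n: distance(n, key))
    return None
```

`alpha` is Kademlia's usual name for the number of parallel queries per round; a real node would query those in parallel and keep per-bucket routing tables, but neither changes the shape of the loop.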

You can implement this in a month in a new programming language that you are still learning. Better yet, you can grab one of the hundreds of implementations and get access to millions of nodes.

Now, what is the simplest algorithm for the Outbox model if I am trying to find the Outbox relay of a key that hasn't published their profile to a relay I am already connected to?

Is it only simple when we all have one relay in common? Because everything is simple when there is one central reliable source of truth.

nostr:note1ft33hptz4csg4p7kmdvx692fra5m07ufz5ye5mtyrngqz2s6rzuqc6gvs8

Good, then you explicitly don't want the global discoverability, and it is very easy for you to claim decentralisation.

And I won't even bother asking what is the difference then between relays vs how Matrix does groups vs RSS feeds etc.. because that is a matter of taste.

As long as the message is clear and the promise of censorship resistant reach is not actively pushed nor passively encouraged, there is nothing to argue against.

Replying to jb55

true

Who said anything about difficulty... it is complex, as in there are too many things you have to reason about to achieve any acceptable level of reliability compared to DNS.

Maybe I am totally wrong, but my ignorant assumption is: the moment you and your target don't happen to use the same relays, and you have to go find their profiles some other way, you immediately start looking at a much worse algorithm than Kademlia... and one with way more assumptions, most of them about social graphs... compare that to Kademlia and DNS.

Anyway, both options are available, and if either has enough demand it will be used.

Compared to a DHT? especially one that you don't have to implement because it is already implemented plenty and you only need one function call to find exactly where the endpoint is because the user authoritatively said so?

Isn't the Outbox model at least an order of magnitude harder to deal with? Forget about the happy path when both you and your target use the same relay set... let's talk about what happens when you don't, and you have to find the relays they published their profile on.

Just because the delivery mechanism is 1-to-1 doesn't mean it is impossible to be 1 to many.

But, if you are interested in reality whatsoever, you have to acknowledge that many-to-many is an intractable problem, and anyone who says otherwise is either dishonest or didn't care to think it through.

Nostr itself started with this realisation (watching p2p systems) and relays are intended to be a layer with fewer nodes so that data can converge before diverging.

You can call that sufficiently decentralised, but it is not any different from Bluesky claiming that there can be tens of relays. Nostr hasn't yet shown how there are going to be tens of relays offering that global view to everyone when "everyone" means hundreds of millions.

so my point stands, global discoverability is impossible to make decentralised, the best you can do is give up on either decentralisation or global view.

If you give up on global view, then you are just talking about rooms, you call them relays, but they are just big rooms.

This is not a criticism of Nostr or Bluesky, it is just an attempt at breaking the deliberate tendency to ignore physics when talking about global discovery... people just decide that the impossible is possible and refuse to engage with reality... not good.

I hate the word "social media" because it offers no indication of the architecture whatsoever... email is not the same as group chat which is not the same as globally searchable low latency feeds.

Even worse, almost everyone means the last of those when they say "social media", but immediately starts pretending to mean smaller scale and more tolerance for high latency as soon as they are confronted with the impossibility of making a global searchable low latency feed decentralised.

It reminds me a lot of shitcoiners refusing to accept that big and rapidly mined blocks are impossible to decentralize, and that it is censorship resistant only to the extent that legal arbitrage is working on confused legislators who can't see the centralisation at hand.

ActivityPub is starting to sound wiser than I ever gave it credit for.

Search and global feeds (just another word for a saved search query) can NOT be censorship resistant, it certainly can be decentralised and permissionless and maybe even somewhat affordable to small competitors, but certainly not cheap enough to be censorship resistant.

That's OK, cozy is better.

Stop supporting ethnostates, stop contributing to this never ending nightmare

Collaborative interoperability is larping; the only interoperability that is worth anything is opportunistic interoperability (or flat out reverse engineering) to steal other clients' market share.

That works most clearly with files, because they escape the incumbents' walled gardens.

The goal is keeping all data on user controlled data stores, and the rest will come naturally from ruthless competition. Just reverse the decline of the web and the rise of private databases.

Protocols and Improvement proposals don't socially scale, the entire web can't sit down and discuss how to shape every type of data and every client behaviour. And that is OK.

You are the experts, but if Alby can do what Brave does for IPFS URLs, i.e. intercept URLs that have Pkarr keys as TLDs and route to the cleaner endpoint while keeping the address bar showing the original URL, that would definitely be fun.

We can ship Rust or wasm code that takes a pubky or pkarr URL and gives you back either a normal domain or an IP and port. In fact, if you can run Rust code in extensions, then we can do TLS too that doesn't depend on CAs.

But I don't know enough about extensions' limitations.
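As a sketch of just the detection part, and with the caveat that this reflects my reading of pkarr's key encoding (z-base-32, 52 characters for a 32-byte ed25519 public key), spotting a Pkarr TLD could be as small as the following. Python here is for illustration only; an extension would ship this as JS or wasm:

```python
from urllib.parse import urlparse

# z-base-32 alphabet, which pkarr uses to encode ed25519 public keys
ZB32 = set("ybndrfg8ejkmcpqxot1uwisza345h769")

def looks_like_pkarr_tld(url: str) -> bool:
    """Heuristic: is the last hostname label a 52-char z-base-32 key?"""
    host = urlparse(url).hostname or ""
    tld = host.rsplit(".", 1)[-1]
    return len(tld) == 52 and all(c in ZB32 for c in tld)
```

On a match, the extension would hand the key to a pkarr resolver and swap in the resolved endpoint while leaving the address bar alone, the way Brave does for IPFS.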

Pkarr was used by 2 external teams before it was used internally in pubky, in fact before the name pubky was chosen.

Good ideas don't need much advertisement, but they definitely deserve advocacy. If you know you have a good idea, reach out to smart people and tell them about it.

I get paid whether or not I talk about my work here... I just enjoy talking about it for anyone who would listen.

If you don't want people to hear about your work, work on something different.

You are failing miserably. I know smart, harsh criticism when I see it; all you are saying is "fuck you", and that is not a truth that hurts... get gud.

Replying to hodlbod

But that's the most extreme version of the argument. Why not make pkarr domains optional like this: https://github.com/nostr-protocol/nips/issues/1548

About home servers, I would say strong consistency doesn't scale. Even centralized web companies know this, and have embraced eventual consistency.

Is this our job though, spending our time begging on the NIPs repo?

We built Pkarr, we showed it to everyone for many months, and we were told it was unnecessary.

Pkarr is still there and keeps getting better. Anyone is welcome to use it.

And by strong consistency, I meant it in the write sense: in Nostr you can't do lists or counters safely; on homeservers it is easy.

Another way to say it is: single writer. Think of how LMDB offers great transactional safety because it simply rejects the lock-free multi-writer paradigm.
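A toy illustration of the difference, with made-up helper names: two clients incrementing a counter through last-write-wins replicas lose an update, while a single serialized writer never does.

```python
def lww_merge(a: int, b: int) -> int:
    # stand-in for "keep whichever replica wrote last"
    return b

def concurrent_increments(start: int) -> int:
    # two clients increment "at the same time" against different relays:
    # both read the same starting state, so the merge loses one increment
    replica_a = start + 1
    replica_b = start + 1
    return lww_merge(replica_a, replica_b)

def single_writer_increments(start: int, n: int) -> int:
    # a single writer serializes every increment against the latest state,
    # the way LMDB allows only one write transaction at a time
    state = start
    for _ in range(n):
        state += 1
    return state
```

Two concurrent increments from 0 merge to 1 under last-write-wins, while the single writer correctly reaches 2; the same failure mode applies to list insertions and exclusion checks.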

And if you believe that most users read way more often than they write, that is a reasonable tradeoff (maybe slower writes, or occasionally unavailable writes) for the sake of having stuff like consistent pagination, exclusion checks, and maybe versioning.

Saying that this doesn't scale is like saying having only one Git remote doesn't scale; of course it does.