Someone tried to blackpill me today. But I coughed it up and spit it out.

The blackpill was that decentralized systems can't innovate because it is too hard or impossible to make breaking changes. Centralized systems like facebook can just innovate without asking permission or preserving compatibility, so they will always innovate much faster, and decentralized systems can never keep up and will never compete with them.

I partially agree. Yes, centralized systems can innovate faster. Yes, it might always be that most people will be on the centralized systems.

But where I disagree is this: Centralized systems keep letting us down. And some of us are happy enough to use a decentralized system for a subset of our social media, to have at least some level of reliability and trust that we can depend on.

Moxie Marlinspike (fittingly named after a knot) poo-poos decentralization here: https://www.youtube.com/watch?v=DdM-XTRyC9c

But much of what he claims is wrong: adjacent to the truth, but not quite correct or meaningful.

Let's look at breaking changes. Consider the ways a breaking change can be handled:

1) You go around and get everybody to update their software (PITA and eventually impossible)

2) You just give up on the feature and decide we can live without it (a cop out)

3) You version the protocol.

nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6, valuing simplicity, rejected (3) early on in his writings for nostr, because versioning multiplies complexity: you have to keep all the old code around and add case-dependent code for every newer version.

But I still believe that (3) is the only way out, and that growing complexity is inevitable. Yes, of course, any change that can be made non-breaking is definitely the preferred approach, but not every change can be made that way.
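To make the trade-off concrete, here's a minimal sketch of what version-gated parsing tends to look like in practice. The message shapes, the version tag, and the V2 expiry field are all invented for illustration; nothing here comes from an actual nostr spec.

```rust
// Hypothetical sketch: version-gated parsing of a protocol message.
// The V1/V2 shapes and the '|' separator are made up for illustration.

enum Message {
    V1 { content: String },
    V2 { content: String, expires_at: Option<u64> },
}

fn parse(version: u16, payload: &str) -> Result<Message, String> {
    match version {
        1 => Ok(Message::V1 { content: payload.to_string() }),
        2 => {
            // V2 appends an optional expiry after a '|' separator (invented here).
            let (content, expiry) = match payload.split_once('|') {
                Some((c, e)) => (c.to_string(), e.parse::<u64>().ok()),
                None => (payload.to_string(), None),
            };
            Ok(Message::V2 { content, expires_at: expiry })
        }
        v => Err(format!("unsupported version {v}")),
    }
}
// Every new version adds another match arm, and the old arms have to stay
// for as long as anyone still emits them. That is the complexity cost of (3).
```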

There are real world examples of this that are working just fine. The Vulkan API is in a sense decentralized. It works on many different hardware devices and with many different OS vendors. It is versioned. How did it not ossify? It's a fucking mystery ain't it?!

Also, Moxie talks about IPv4 not being able to get to IPv6 and IPv4 ossifying. But he fails to mention the obvious: IPv4 is the greatest success story ever. Damn near everybody uses it all the time. So who cares if some parts of it have ossified? Not me. And to be honest, parts of it (like congestion control) were able to change very late in the game. So this is a piss-poor argument against decentralization.

Also, Moxie talks about how many people are programming stuff and you can't keep up with all of them. But he fails to mention that less than 10% of those people are useful. Or that management interference almost necessarily breaks any useful thing they end up doing. Against the view of all the pundits (Bill Gates most notably), open source software supersedes commercial software in almost every domain. Because in open source, and with decentralized solutions, the entire world participates, rather than just one or two buildings in Redmond. And generally only the most intelligent, high-IQ people can pick it up and run with it, meaning you have a worldwide team of highly intelligent people versus a limited commercial team that is mostly deadweight and plagued by managers who want to make their mark.

I may not be a bright-eyed (red-eyed?), bushy-tailed spring chicken bitcoiner who is upbeat about everything and believes everything is possible. I'd say I'm a bit more cautious than most about what I hope for or aim at. But I am still a "can do" person and I will never stop trying.

*spits out the black pill*

Discussion

*sigh* here's your "amen"

> open source software supersedes commercial software in almost every domain

Absolutely false. There are some big ones, but mostly in niche domains. Most software that normal people actually use is not open source.

I think he means this as an inevitability. Every domain will eventually exhaust meaningful innovation at which point open source will start closing the gap. Eventually there is no money in continuing development and open source will eat that domain.

You know the point of diminishing innovation has been reached when the software in question moves to a subscription model or some other indicator of rent seeking.

One note on versioning: it can be a form of centralization. Someone has to assign the numbers. This works well for common software projects, but I am kind of a fan of just making everything a hash.

Define your interfaces, then hash them. That hash is your version. If you have some common way of defining interfaces, then even if a particular protocol gets reinvented two star systems over, they will interoperate just fine.
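As a rough sketch of that idea (the canonical form, the field list, and the use of SHA-256 below are my assumptions, not a worked-out spec; it leans on the sha2 and hex crates):

```rust
// Sketch of hash-as-version: hash a canonical description of an interface
// and use the digest as its version identifier. The "canonical form" here
// (sorted "name:type" lines) is an arbitrary choice for illustration.
use sha2::{Digest, Sha256};

fn interface_version(fields: &[(&str, &str)]) -> [u8; 32] {
    let mut lines: Vec<String> = fields
        .iter()
        .map(|(name, ty)| format!("{name}:{ty}"))
        .collect();
    lines.sort(); // canonicalize field order so equivalent definitions hash alike
    Sha256::digest(lines.join("\n").as_bytes()).into()
}

fn main() {
    let v = interface_version(&[("id", "bytes32"), ("created_at", "u64"), ("content", "string")]);
    println!("version = {}", hex::encode(v)); // a 32-byte version, no registry needed
}
```

Two parties who derive the same definition independently end up with the same identifier, with nobody assigning numbers.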

The more likely case where the people two star systems over go a different direction also works. You won't both end up with conflicting version numbers when you iterate from the same base.

The main problem is that 32 bytes of version numbers is a lot. We need ways to help people know what they are using. (If they care)

That is a good point. But I'm not as concerned with specification centralization. If that is attacked, well, people fork it. And we have to rely on people converging on something anyways, which means it will be forked but somehow there will be the most popular fork which people will follow. Hmm, reminds me of blockchain.

But as to the details, I'm inspired by a number of different mechanisms, none of which I'm committed to. TLS uses a list of available algorithms, and social media could use a list of feature extensions... at least for additive changes. Or it can be FEATUREA_V1 and also FEATUREA_V2, where a server might support both for a long time, until V1 dies out.
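A minimal sketch of the feature-list idea, with placeholder feature names (nothing below corresponds to an actual NIP or TLS extension):

```rust
// TLS-style capability negotiation with string feature tags.
use std::collections::HashSet;

// Keep only the features both sides advertise, in the client's preference order.
// A server can advertise featurea_v1 and featurea_v2 side by side until v1 traffic dies out.
fn negotiate(client: &[String], server: &HashSet<String>) -> Vec<String> {
    client.iter().filter(|f| server.contains(*f)).cloned().collect()
}

fn main() {
    let server: HashSet<String> = ["featurea_v1", "featurea_v2", "featureb_v1"]
        .iter().map(|s| s.to_string()).collect();
    let client: Vec<String> = ["featurea_v2", "featurec_v1"]
        .iter().map(|s| s.to_string()).collect();
    println!("{:?}", negotiate(&client, &server)); // ["featurea_v2"]
}
```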

Yup. I am a bit torn as well. I've toyed with a few ideas.

I have some code that defines how a file can be saved a zillion ways. It uses 4-byte enums. Note: not u32, because screw endianness. Every file is just a byte string. The first 4 bytes tell you the encryption method. Strip those off and use the corresponding algorithm to decrypt. The first 4 bytes of the decrypted file tell you the compression algorithm. Strip them off and decompress. The first 4 bytes of that tell you the serialization method (file type, I guess).

Makes it easy to just add new algorithms.
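Roughly what that peeling-off looks like; this is my own sketch of the scheme as described, with made-up tag values, not the actual code:

```rust
// Layered 4-byte tags, compared as raw byte strings so endianness never comes up.
// The tag values (b"NONE", b"XCHA", b"ZSTD", b"JSON", b"CBOR") are invented.

fn strip_tag(data: &[u8]) -> Option<(&[u8], &[u8])> {
    if data.len() < 4 { None } else { Some(data.split_at(4)) }
}

fn open_file(raw: &[u8]) -> Result<Vec<u8>, &'static str> {
    // Layer 1: encryption method.
    let (enc, rest) = strip_tag(raw).ok_or("truncated file")?;
    let decrypted = match enc {
        b"NONE" => rest.to_vec(),
        b"XCHA" => return Err("XChaCha decryption not wired up in this sketch"),
        _ => return Err("unknown encryption tag"),
    };
    // Layer 2: compression method.
    let (comp, rest) = strip_tag(&decrypted).ok_or("truncated payload")?;
    let decompressed = match comp {
        b"NONE" => rest.to_vec(),
        b"ZSTD" => return Err("zstd decompression not wired up in this sketch"),
        _ => return Err("unknown compression tag"),
    };
    // Layer 3: serialization method (effectively the file type).
    let (ser, body) = strip_tag(&decompressed).ok_or("truncated body")?;
    match ser {
        b"JSON" | b"CBOR" => Ok(body.to_vec()),
        _ => Err("unknown serialization tag"),
    }
}
```

Adding a new algorithm is just a new tag and a new match arm; old files keep working.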

i'm a big fan of compact semi-human-readable sentinels in binary data formats where structure is flexible. i'm using 3 byte prefixes for a key-value store to indicate which table a record belongs to.
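something like this, i'd guess (the prefixes below are invented for illustration):

```rust
// 3-byte table prefixes for a flat key-value store: records for the same
// table share a semi-readable prefix and sort together in the keyspace.
fn record_key(table: &[u8; 3], id: &[u8]) -> Vec<u8> {
    let mut key = Vec::with_capacity(3 + id.len());
    key.extend_from_slice(table); // e.g. b"evt" or b"idx" (made-up examples)
    key.extend_from_slice(id);
    key
}
```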

same reason i hate RFC numbers, BIP numbers, kind numbers and nip numbers. they mean nothing without a decoder ring. they should be descriptive, or gtfo

in any case though, if you are going to need lexicographically sortable numbers, you want big-endian.
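a tiny demonstration of why, using nothing protocol-specific:

```rust
// Byte-wise (lexicographic) order of big-endian encodings matches numeric
// order; little-endian encodings do not sort numerically.
fn main() {
    let values = [1u32, 256, 2];
    let mut be: Vec<[u8; 4]> = values.iter().map(|n| n.to_be_bytes()).collect();
    let mut le: Vec<[u8; 4]> = values.iter().map(|n| n.to_le_bytes()).collect();
    be.sort();
    le.sort();
    // big-endian sorts as 1, 2, 256 (numeric order); little-endian as 256, 1, 2.
    println!("{:?}", be.iter().map(|b| u32::from_be_bytes(*b)).collect::<Vec<_>>());
    println!("{:?}", le.iter().map(|b| u32::from_le_bytes(*b)).collect::<Vec<_>>());
}
```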

it's just nutty to use a number when the thing could be a mnemonic and is not going to be so numerous that you can't fit it into 4 letters.

btw, this is a long established convention, the Macintosh and Amiga system libraries both use this convention of 32 bit mnemonic keys, to indicate filetypes, as key/value structure keys, and so on.

humans are not fucking rolodexes. language is words, not numbers. it is irrelevant that words are numbers: numbers are not words.

> it is irrelevant that words are numbers: numbers are not words.

100%. Unless we want to remove the human brain from these systems altogether, we'll have to deal with its word-centric processing.

It depends on where you are in the tech stack. UTF-8 or even ASCII is not human readable; they are bytes that need to be interpreted as such and rendered as glyphs via a lookup table and a graphics engine.

If required, you can do the same for any enumerated type. The nice thing about text encodings is that they are widely accepted and have many implementations of renderers.

But if you are doing the tech right, most of your base protocol is not going to be human readable, not because legibility is undesirable, but because it is nigh impossible.

Think of the raw data on the line. You need source and destination IP addresses and port numbers. Then you need something like source and destination node IDs, and an ephemeral pubkey or nonce (depending on your primitives).

The rest is just gibberish because it is encrypted. None of that can be easily made parseable by humans.

Next you have the task of turning that encrypted blob into something useful. You need more keys and signatures, etc. Eventually you get some final decrypted, unpackaged data that you hand off to the client application. The underlying protocol doesn't care what the bytes it encapsulates/decapsulates are. It can't; if it knew anything about them, you'd be leaking metadata for men in the middle to hoover up.

Once you get to the client application, I agree with you. You want your data to be human friendly.

Generally I am a fan of being careful about how complex data in social media etc. gets interpreted, because people tell lies. For instance, do you translate an npub into the name its owner chooses, the name the viewer chooses, or a name the community chooses? Same with the profile picture.

The first sounds good, but you get impersonation or other lies. Numbers on the wire have to be interpreted the way the recipient wants, not the way the sender wants.

the decoder ring for UTF-8 is always available. a decoder ring for the meaning of kind numbers or BIPs is not.

you don't even need type values if your structure is rigid, like nostr events: you just define an encoding order, and if a field is fixed-length it's redundant to have a length prefix for it.
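for example, a rigid fixed-order layout might look like the sketch below. the field order and sizes are my own illustration, not the wire format of any actual NIP:

```rust
// Rigid encoding order: no type tags, and no length prefixes for fixed-size
// fields. Only the one variable-length field (content) goes last.
struct Event {
    id: [u8; 32],
    pubkey: [u8; 32],
    created_at: u64,
    content: Vec<u8>,
}

fn encode(e: &Event) -> Vec<u8> {
    let mut out = Vec::with_capacity(72 + e.content.len());
    out.extend_from_slice(&e.id);                       // bytes 0..32
    out.extend_from_slice(&e.pubkey);                   // bytes 32..64
    out.extend_from_slice(&e.created_at.to_be_bytes()); // bytes 64..72
    out.extend_from_slice(&e.content);                  // everything after byte 72
    out
}

fn decode(buf: &[u8]) -> Option<Event> {
    if buf.len() < 72 {
        return None;
    }
    Some(Event {
        id: buf[0..32].try_into().ok()?,
        pubkey: buf[32..64].try_into().ok()?,
        created_at: u64::from_be_bytes(buf[64..72].try_into().ok()?),
        content: buf[72..].to_vec(),
    })
}
```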

also, mentioning network addresses: even those are handled in a human-readable form, not the native one. the native form of an IPv4 address is 4 bytes for the address and 2 bytes for the port.
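in other words, something like:

```rust
// Native (wire) form of an IPv4 socket address: 4 bytes of address plus
// 2 bytes of port, versus the human-readable dotted-quad rendering.
use std::net::{Ipv4Addr, SocketAddrV4};

fn main() {
    let raw: [u8; 6] = [192, 168, 1, 7, 0x1F, 0x90]; // 0x1F90 = port 8080
    let addr = Ipv4Addr::new(raw[0], raw[1], raw[2], raw[3]);
    let port = u16::from_be_bytes([raw[4], raw[5]]);
    println!("{}", SocketAddrV4::new(addr, port)); // prints "192.168.1.7:8080"
}
```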

anyhow, yeah, of course, you use whatever fits the task best, but one of the biggest reasons i favor mnemonic sentinels in binary data is that these are stable values. if you define a series of numbers and a whole bunch of them become deprecated, there are holes in the number space, and you can't reuse them without a breaking change. a 2 or 3 character sentinel (or even 4) is never going to change, and has enough space that you can just add more, such as with an extra character signifying the version or whatever.

I think versioning is a great idea, but one also has to realize that Nostr isn't a monolithic protocol. There are many pieces in different states of development. What seems to create a lot of friction is that those pieces can and do change frequently enough that you have to pay attention. Then, after a breaking change, it's not all that clear which side of the break an app is operating on. Versioning of each NIP would help there. It would also help to merge the many draft NIPs that one currently has to search pull requests for, when they could be right there in the big list.

Centralised systems can innovate much faster if you are innovating according to the consensus. If you want to disrupt or go against the consensus on key topics, decentralised is the only way, because on a centralised system it only takes a call for that innovation to stop real quick. The cost of deterring development that goes against the consensus is near 0 on a centralised system.

Kind of makes me think of how different governments also work. It might appear that a communist country has its shit together with its technology and innovations (closed source and directed), but a country that believes in free capitalism would possibly work more like free and open source software. Think of the structure of a government like a programming language and the guidelines it has. Think of an amendment to a Constitution as similar to a fork.

The problem is that the Central Banks around the world have been the ones dictating the code and the world is about to go open source.

i think what you guys need to do is stop thinking of it in terms of protocol and start thinking of it in terms of developer community

as long as everybody shares the vision of what needs to be accomplished, there may be pockets working on different ways of accomplishing it, and periodically some of the more successful projects may get consolidated while the dead ends are dropped

human DNA is code, and most of human DNA is dead weight. we share 98.8% of our DNA with chimps, and about 50% of our genes in common with a banana. point being, our code is not that efficient, but it is good enough.

the elites are now working on human-animal hybrids by splicing DNA between species which normally cannot interbreed ... i wonder what would be the equivalent of that in software.

identify what EVERYONE agrees on (we need decentralized, censorship-resistant social media) and build a community around that. allow people to work on unrelated projects within that community, but maintain dialogue / stay informed on what others are working on and why, and periodically share notes / help each other out.

this may be inefficient but what is more important is to have an approach that cannot be backed into a corner or a dead end.

the collective consciousness of the community must keep evolving; that's what matters.

sincerely, Dissy

Tbh I skimmed a few paragraphs 'cause that's a lot of words. I believe that the decentralized nature of the protocol makes it harder to innovate, but the biggest problem has nothing to do with that. I think the biggest problem is that developers and moderators aren't rewarded for their work because they don't own the users or posts on any client or relay they develop.