Oh, I was purposefully ignoring that, since I am allowing git to get all the references from the repo itself.

What is the perceived benefit of having state outside of the repo itself? git already does a good job figuring out state for itself. Doesn't that open up the possibility of eventually consistent states when fetching?


Discussion

To remove trust in a git server to give you the state intended by the maintainer(s). To allow redundancy via multiple git servers.

I am out of my depth here in regards to maintaining open source repos. I need to read more about the threats that this solves. mind sharing some references?

If everyone uses a centralised git server then it is generally a trusted one. The centralised party may be obliged by a very powerful actor to push malicious commits to specific IP addresses as part of a targeted attack on a high-value individual, but I can't point to an example of this happening. If it became public that this happened too often, it would come at a high reputational cost.

As we move away from highly centralised git servers operated by companies with a reputation to defend, towards self-hosted servers or services run by smaller and less known organisations, it becomes more likely that the server could be actively malicious. Our mantra is 'don't trust, verify', and our network is built on content being signed by its authors instead of vouched for by a trusted third party. It therefore makes sense that maintainers sign repository state, so we are only trusting the servers with our privacy, using the same trust model as the relays.

So this would be a way for me to validate that whatever the host server sends me is signed state only, but it requires that whoever pushes signs an event. I think I get it now. Thank you for the explanation.
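To make the validation step above concrete, here is a minimal sketch of comparing the refs a server advertises against the refs listed in a maintainer-signed state event. The dict shape of the state event is my assumption (roughly ref name to commit hash), not the actual event format, and `verify_fetched_refs` is a hypothetical helper, not part of any existing tool:

```python
def verify_fetched_refs(state_refs, server_refs):
    """Compare refs from a signed state event against what the server sent.

    state_refs:  {"refs/heads/master": "<sha>", ...} taken from the
                 maintainer-signed state event (assumed shape).
    server_refs: the refs the git server actually advertised on fetch.

    Returns the refs that disagree; an empty list means the fetch matches
    the signed state. Depending on policy, refs the server advertises that
    are absent from the state event could also be rejected.
    """
    mismatches = []
    for ref, sha in state_refs.items():
        if server_refs.get(ref) != sha:
            mismatches.append(ref)
    return mismatches

print(verify_fetched_refs(
    {"refs/heads/master": "abc123"},
    {"refs/heads/master": "abc123", "refs/heads/dev": "zzz999"},
))  # [] — the signed ref matches what the server sent
```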

My instinctive thinking is that git trees are very sensitive to unexpected changes and often plagued by conflicts over inconsistencies in the source code between remote and local copies. I am also assuming that maintainers hold the source code in a local copy, so in my mind it is hard to force-push changes that wouldn't be caught by git. Then, wouldn't the fact that you can migrate your work over to a different server in a frictionless manner be enough to address the issue from the maintainer's perspective? Detect malicious intervention in the codebase, update the repo announcement with a new remote URL, and it's done.

But I can understand that if we can actually prevent that from happening, by rejecting a push/fetch altogether based on a distributed state, it does provide a better experience. Thank you again for the explanation. I'll have to make adjustments on my side.

Yes, we are touching on a well-explored problem of establishing state in a distributed system (the Byzantine generals problem).

The git protocol does a really good job of preventing unintentional force pushes, as the state can be managed in a single place.

I originally had the remote helper push to the git servers first and only update the state event once it was accepted by all git servers listed.

But then, in order to support the flow where the git server waits for the state event to authorise the push, I started issuing the state event first.

It minimises the chances of divergence by fetching the state from all git servers immediately before issuing the state event, and rejecting the change if the commits can't be fast-forwarded.
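The fast-forward test mentioned above boils down to an ancestry check: the update is safe only if the old tip is reachable from the new tip via parent links, i.e. the push only adds commits on top of the existing history. A self-contained sketch on a toy commit graph (not the remote helper's actual code, which would ask git itself, e.g. via `git merge-base --is-ancestor`):

```python
def is_fast_forward(parents, old_tip, new_tip):
    """Return True if old_tip is an ancestor of new_tip.

    parents maps each commit id to a list of its parent ids;
    we walk backwards from new_tip looking for old_tip.
    """
    stack = [new_tip]
    seen = set()
    while stack:
        commit = stack.pop()
        if commit == old_tip:
            return True
        if commit in seen:
            continue
        seen.add(commit)
        stack.extend(parents.get(commit, []))
    return False

# Toy history: a <- b <- c on one line, with d a divergent commit off a.
parents = {"b": ["a"], "c": ["b"], "d": ["a"]}
print(is_fast_forward(parents, "a", "c"))  # True: c builds on top of a
print(is_fast_forward(parents, "c", "d"))  # False: d diverged, so the change is rejected
```

This is also why the race described next matters: the check is only as fresh as the fetch it was computed from.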

But if another maintainer pushes after I have completed the fetch and before I have completed the push, then the state event would be out of sync with the git server.

I was thinking about having a `nostr.use-nostr-for-state` git config flag that defaults to true, so that maintainers could use the remote helper without the state-event element.
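For illustration, opting out might then look something like this (the flag name is taken from the proposal above; the per-repository scope and default are my assumption):

```shell
# Hypothetical: disable the state-event element of the remote helper
# for the current repository only; it would default to true.
git config nostr.use-nostr-for-state false
```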