I mean, that means you need at least 4 different programs to use the current git stuff:

Ngit, Ngit viewer, Nostr relay, Blossom. 🤦🏻‍♀️

Just pile on the programs, bro. I'm sure GitHub is shaking in its boots. 🙄


Discussion

one of the nice things about using http/websockets is we have many options

i already have simple web services that let you dump or pull events en masse or filtered by pubkey, and i'm currently fixing them to play nice with the pubkey indexes

it is trivial to add a path-defined HTTP endpoint to POST stuff to, and you don't even need special tools to do it. every programming language's libraries have plenty of functions for pushing and pulling data via POST, GET, and PUT (and whatever else) over http. you can do even more fun things by turning paths into parameters, using json or whatever format, or using HTTP headers to specify parameters

ultimately blossom is actually just an http server with a specific HTTP API based on paths, i believe - or maybe it uses ?parameters=thing as well, or can

if you want to drop giant loads into your relay rather than mess around with interactive sockets - when your task is singular, in or out - it's dead easy to do it with http. you can control access with http-basic or, though i haven't got this done yet, that nip-98 stuff, which uses auth events in their place

having built an http import/export myself: if your task sometimes requires bulky uploads or downloads, i think this is what you should do. it's really simple - i wish i could convey to you how joyously simple it is to write web services with Go, but it's probably similarly simple with php

and not only that - as is done with strfry, you can even have http endpoints invoke other programs and pipe data directly to them, like a series of tubes (and so on)

so, really, i think what you need is to learn how to write simple TTY-savvy things that can read stdin, spit stuff to stdout, and yell at you over stderr - i think chiptuner will concur with me about this too, though i don't know how fun http or socket programming is in C; it's sure as hell fun with go

As far as I know, Blossom is not an event store but a file store. If you put it on Blossom, you can only refer to it from events; it is external to Nostr json events. You've then exported the events to a different system, and you lose having Nostr interact directly with the events.

But, then, why bother? Why not just use Nostr to reference items on a git server, and skip the other steps?

blossom is just a content addressed network file store

you store stuff, and the hash of the stuff is the address you can retrieve it with
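
That store-by-hash idea in a few lines of Go - the map here is just a stand-in for a real blob store:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// blobAddress derives the retrieval address from the content itself:
// the hex-encoded SHA-256 of the bytes, as blossom does.
func blobAddress(blob []byte) string {
	sum := sha256.Sum256(blob)
	return hex.EncodeToString(sum[:])
}

func main() {
	store := map[string][]byte{}         // stand-in for a real blob store
	blob := []byte("hello")
	addr := blobAddress(blob)            // the hash is computed from the content
	store[addr] = blob                   // stored under its own hash
	fmt.Println(addr, string(store[addr]))
}
```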

if you want to do a path-based file store, that's not difficult either, but it requires a path-to-file mapping table - a tree-structured linked list

i think webdav is an example of a protocol that already does this; you can probably find a library or canned server to redirect traffic for this purpose, as the case may be

i wrote a simple path based store on top of badger that i plan to some day use as a configuration store... or something

it's a fun subject, really, store and retrieve, but how do you want to address it

Would it work on Nostr events stored on relays?

nostr events are addressed the same way, by the event id, which is a hash

it's even something i thought about doing - creating a layer 2 that uses blossom with the event ids as the blob ids... the only tricky part is that blossom expects the hash to cover the whole thing, but the thing that makes the hash is missing the signature. the signature then has to be added, but it can't be part of what is referred to by the content - it has to be attached to it (this is how i implement the event store on realy)
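
A sketch of the id computation under discussion, per NIP-01: the id is the SHA-256 of the serialization [0, pubkey, created_at, kind, tags, content], so the sig is not covered by it and has to travel alongside. (Go's default JSON escaping differs from NIP-01's canonical rules in some corner cases, so treat this as illustrative, not a conformant implementation.)

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// eventID hashes the canonical NIP-01 serialization. Note what is
// missing: the signature, which is made *over* this id afterwards.
func eventID(pubkey string, createdAt int64, kind int, tags [][]string, content string) string {
	ser, _ := json.Marshal([]interface{}{0, pubkey, createdAt, kind, tags, content})
	sum := sha256.Sum256(ser)
	return hex.EncodeToString(sum[:])
}

func main() {
	id := eventID("ab..", 1700000000, 1, [][]string{}, "hello")
	// address the event by this hash; the sig has to be attached separately
	fmt.Println(id)
}
```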

probably a simple extension to blossom could be suggested where you provide an npub and signature adjunct to the file being identified by the hash. but this is precisely the point you are digging at - what if we just use the nostr event format... i mean, you can store arbitrary sizes, with the small limitation that base64 encoding only carries 6 bits per byte - a limitation of json

how it's actually stored can be entirely different: it can be binary encoded, or you can mess with it and use json for the metadata, then attach giant blobs of binary to the end and store that against a key in a key-value store

if i were to say how i would prefer to do it - you'd have pubkey/sig/blob

you could search the events by pubkey and blob hash and verify their authenticity with the sig

i could so easily make a badger based store that can do this on a http endpoint with an api for "by blob" and "by pubkey"
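
An in-memory stand-in for that badger-backed idea, with the two lookups sketched - put, byBlob, and listByPubkey are invented names, and a real version would also keep the sig so reads could be verified:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// record pairs a blob with the pubkey that published it; a real store
// would also keep the signature for authenticity checks on read.
type record struct {
	pubkey string
	blob   []byte
}

// blobStore indexes records two ways: by blob hash and by pubkey.
type blobStore struct {
	byHash   map[string]record
	byPubkey map[string][]string // pubkey -> blob hashes
}

func newBlobStore() *blobStore {
	return &blobStore{byHash: map[string]record{}, byPubkey: map[string][]string{}}
}

// put stores the blob under its content hash and indexes it by pubkey.
func (s *blobStore) put(pubkey string, blob []byte) string {
	sum := sha256.Sum256(blob)
	h := hex.EncodeToString(sum[:])
	s.byHash[h] = record{pubkey: pubkey, blob: blob}
	s.byPubkey[pubkey] = append(s.byPubkey[pubkey], h)
	return h
}

// byBlob is the "by blob" endpoint: fetch content by its hash.
func (s *blobStore) byBlob(hash string) ([]byte, bool) {
	rec, ok := s.byHash[hash]
	return rec.blob, ok
}

// listByPubkey is the "by pubkey" endpoint: list a user's blob hashes.
func (s *blobStore) listByPubkey(pubkey string) []string {
	return s.byPubkey[pubkey]
}

func main() {
	s := newBlobStore()
	h := s.put("npub1example", []byte("blob one"))
	fmt.Println(h, s.listByPubkey("npub1example"))
}
```

Swapping the maps for badger keyspaces and putting the two lookups behind HTTP paths is the remaining plumbing.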

this is the thing

the nostr event structure is practically file metadata... it even gives you arbitrary tags to add extra things to filter on

like nostr:npub12262qa4uhw7u8gdwlgmntqtv7aye8vdcmvszkqwgs0zchel6mz7s6cgrkj the biggest problem with the filter query protocol is the lack of pagination

i could even think of a way to fix this by adding a new envelope type that connects to a query cache

so, you send query, the relay scans for matches, and assembles a cache item, which contains all of the matching event IDs plus the filter in a queue item

this item is stored in a circular buffer so when the buffer is full, the oldest ones are dropped to make room for the new ones

in addition, to be clear, the event IDs are already indexed to a monotonic index value in the database, so it's not a very big amount of data - each event in the result is simply an 8 byte (or, like fiatjaf used, 4 byte) serial number, and done

i used 8 bytes because i think 4 billion records is not very much when the average event size is around 700 bytes
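
The query-cache idea above, sketched with invented types (queryResult, queryCache) - the relay would fill the serials from its indexes when the filter is scanned; here they are hard-coded:

```go
package main

import "fmt"

// queryResult caches the serials matching one filter so a client can
// page through them with repeated "next" requests.
type queryResult struct {
	filter  string   // the filter that produced this result (stand-in)
	serials []uint64 // monotonic serials of the matching events
	cursor  int      // how far the client has paged so far
}

// next returns up to limit serials and advances the cursor.
func (q *queryResult) next(limit int) []uint64 {
	end := q.cursor + limit
	if end > len(q.serials) {
		end = len(q.serials)
	}
	page := q.serials[q.cursor:end]
	q.cursor = end
	return page
}

// queryCache is a fixed-size circular buffer: when it is full, the
// oldest cached result is overwritten to make room for the new one.
type queryCache struct {
	slots []*queryResult
	head  int
}

func newQueryCache(size int) *queryCache {
	return &queryCache{slots: make([]*queryResult, size)}
}

func (c *queryCache) add(r *queryResult) {
	c.slots[c.head] = r // whatever was oldest at head gets dropped
	c.head = (c.head + 1) % len(c.slots)
}

func main() {
	c := newQueryCache(4)
	r := &queryResult{filter: `{"kinds":[1]}`, serials: []uint64{1, 2, 3, 4, 5}}
	c.add(r)
	// page through the cached result two serials at a time
	fmt.Println(r.next(2))
	fmt.Println(r.next(2))
	fmt.Println(r.next(2))
}
```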

the biggest problem with all of this is encoding

JSON makes binary data somewhat expensive to store, because you have to use base64, and even though you can use unicode, i don't know of a scheme that leverages unicode to improve the ratio from 6 of 8 bits per byte of data to something close to 8 of 8

TLVs are a very nice format for this kind of thing: you have a type code, then a blob length, and then the data. the type code can be human readable and so can the length value; you probably just need some kind of separator between them. think like an a-tag, but instead of kind/pubkey/d-tag it's 4 character magics and decimal size values: JPEG:1000020: and then right after the blob a new one like HTML:10002:...
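
A rough cut of that TLV scheme in Go, with the 4-character magic, a decimal length, and colon separators - the exact framing is just the example above, not any existing spec:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// encodeTLV writes one record as MAGC:length:payload, with a 4-character
// human-readable type code and a decimal length.
func encodeTLV(magic string, payload []byte) []byte {
	return append([]byte(fmt.Sprintf("%s:%d:", magic, len(payload))), payload...)
}

// decodeTLV parses one record from the front of buf, returning the
// magic, the payload, and whatever bytes follow the record.
func decodeTLV(buf []byte) (magic string, payload, rest []byte, err error) {
	// only the first two colons are separators; the payload may contain any bytes
	parts := strings.SplitN(string(buf), ":", 3)
	if len(parts) != 3 || len(parts[0]) != 4 {
		return "", nil, nil, fmt.Errorf("malformed TLV header")
	}
	n, err := strconv.Atoi(parts[1])
	if err != nil || n > len(parts[2]) {
		return "", nil, nil, fmt.Errorf("bad TLV length")
	}
	body := []byte(parts[2])
	return parts[0], body[:n], body[n:], nil
}

func main() {
	// two records back to back, like the JPEG-then-HTML example above
	buf := append(encodeTLV("JPEG", []byte("<jpeg bytes>")), encodeTLV("HTML", []byte("<p>hi</p>"))...)
	for len(buf) > 0 {
		magic, payload, rest, err := decodeTLV(buf)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s %d bytes\n", magic, len(payload))
		buf = rest
	}
}
```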

Do you mean git blob hash?

That was nostr:nprofile1qqs8qy3p9qnnhhq847d7wujl5hztcr7pg6rxhmpc63pkphztcmxp3wgpz9mhxue69uhkummnw3ezuamfdejj7qgmwaehxw309a6xsetxdaex2um59ehx7um5wgcjucm0d5hsz9nhwden5te0dehhxarjv4kxjar9wvhx7un89uqaujaz original NIP-62 idea, but it got slapped down because everyone wanted the commit content in events. And now they're creating Blossom blobs that are copies of the git blobs, or something.

yeah, i think the commit hash in the events and that refers to a blob hash that is stored in blossom is the way to go

the thing is that i don't think Git uses sha256, so you'd have to have a variant of blossom that uses whatever hash it is... md5? idk 😕

git seriously needs to be upgraded as a protocol, to be honest... it was SHA1, i remember now...

SHA256 is already supported

ok so that means that you can store the nodes in events and refer to blobs to fetch them

blossom imo as a protocol is garbage, as it tries to consolidate management (upload/delete/list) with retrieval of blobs

it is a big pain in the ass for scaling - look at any service and you will see cdn domains are separate from upload domains

blossom also makes no attempt to allow media optimization, and I believe it is an acceptable tradeoff to sacrifice integrity for reduced data usage if you can turn it off as needed

blobs should be identified by nostr event IDs, meaning you get metadata for free, and if a user wants their blob gone, they can issue a delete event and send it to all hosts

rehosting content becomes an explicit action

Well, after Alexandria Gutenberg we're going to work on Aedile SDK for a bit, and we can maybe think up a scheme for Blossom 2.0, while the guys fiddle with C++...

Code name Weed.

the use of nostr as a flexible mechanism of reference is the elephant in the room

Yeah because it doesn't make sense to rebuild git servers from scratch out of Nostr events when we already have git servers.

Yeah ephemeral events for synchronizing state is a good use of Nostr's event-driven nature.

#Programming sockets in #C takes some getting used to, then it's fun.

yeah, programming networks is easy in go, and then it's fun

😂

Because blobs are in fact very different from indexable small events

This is why “object stores” are separate from “databases”

Files can be easily cached as the only operation is a key read, and can be served from high-throughput high-storage servers

Events cannot be easily cached

You can store them in a local relay.

I see Nostr adding value, if the commits are broken down into pieces, but not if they are just one big event.

What I am referring to here is on the server side.

Oh, okay.

and also distinct from filesystems, which also have various specifics of metadata

there are only 3 operations that matter

- write

- read

- delete

usually, the write operation will never map the same key to different content (even after a delete)

deletes are eventually consistent
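
Those three operations sketched over a content-addressed map, with deletes recorded as tombstones as a crude nod to eventual consistency - all names invented:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// objectStore supports the three operations above. Deletes are recorded
// as tombstones, so replicas can converge on the removal later rather
// than needing it to happen everywhere at once.
type objectStore struct {
	blobs      map[string][]byte
	tombstones map[string]bool
}

func newObjectStore() *objectStore {
	return &objectStore{blobs: map[string][]byte{}, tombstones: map[string]bool{}}
}

// write stores the blob under its content hash; the same key can never
// map to different content, because the key is derived from the content.
func (s *objectStore) write(blob []byte) string {
	sum := sha256.Sum256(blob)
	key := hex.EncodeToString(sum[:])
	if !s.tombstones[key] {
		s.blobs[key] = blob
	}
	return key
}

func (s *objectStore) read(key string) ([]byte, bool) {
	b, ok := s.blobs[key]
	return b, ok && !s.tombstones[key]
}

// remove marks the key with a tombstone; compaction can reclaim the
// bytes whenever it gets around to it.
func (s *objectStore) remove(key string) {
	s.tombstones[key] = true
	delete(s.blobs, key)
}

func main() {
	s := newObjectStore()
	key := s.write([]byte("some blob"))
	_, ok := s.read(key)
	fmt.Println(key, ok) // readable after write
	s.remove(key)
	_, ok = s.read(key)
	fmt.Println(ok) // gone after delete
}
```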