nostr:npub1jlrs53pkdfjnts29kveljul2sm0actt6n8dxrrzqcersttvcuv3qdjynqn

https://www.youtube.com/watch?v=R-5DHymkfzw

Very insightful. I love Rich Hickey's talks as well. I've never written any Clojure, but it influenced me deeply.

If we treat relays as stateful objects and DVMs as methods, should we just have a "function call" NIP?
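
To make the idea concrete, here's a rough sketch of what a call/return pair of events could look like (the kind numbers and tag names are invented for illustration, not from any existing NIP):

```typescript
// Hypothetical shape of a "function call" NIP: a request event and a
// response event linked by an "e" tag. The kinds and tags here are
// invented for illustration, not from any existing NIP.

interface FunctionCall {
  kind: 25910; // hypothetical ephemeral "call" kind
  pubkey: string; // the caller
  tags: string[][]; // e.g. [["method", "count"], ["param", "kinds", "1"]]
  content: string; // optional JSON-encoded arguments
  // ...plus the usual id/created_at/sig fields
}

interface FunctionReturn {
  kind: 25911; // hypothetical ephemeral "return" kind
  tags: string[][]; // e.g. [["e", "<id of the call event>"]]
  content: string; // JSON-encoded return value
}
```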

You must know Datomic, right? Datomic is a really interesting database, but since it's closed source, we can't play around with it and learn from its implementation.

Do you think we can implement something like Datomic in a Nostr relay?

Database as a value or relay as a value.


Discussion

Following up on this talk: since an event is immutable, it makes no sense to have a stream of immutable data with exactly one item.

Put in programming terms:

get(event_id) -> stream makes no sense

get(event_id) -> event | nothing makes sense

Then we should just have HTTP GET/POST for reading and writing a single event.
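
Something like this, as a sketch (the /event path is my assumption, not an existing relay API; the event shape is the standard Nostr event):

```typescript
// Sketch of the HTTP idea: an event is an immutable value, so GET
// returns it (or nothing) and POST stores it. The /event/<id> path
// is an assumption for illustration, not an existing relay API.

interface NostrEvent {
  id: string;
  pubkey: string;
  created_at: number;
  kind: number;
  tags: string[][];
  content: string;
  sig: string;
}

// get(event_id) -> event | nothing
async function getEvent(relayUrl: string, id: string): Promise<NostrEvent | null> {
  const res = await fetch(`${relayUrl}/event/${id}`);
  if (res.status === 404) return null; // "nothing": the event isn't here
  return (await res.json()) as NostrEvent; // exactly one immutable value
}

// post(event) -> write a single event
async function postEvent(relayUrl: string, event: NostrEvent): Promise<boolean> {
  const res = await fetch(`${relayUrl}/event`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(event),
  });
  return res.ok;
}
```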

WebSocket-everything is overrated.

The only use case that might justify a stream of one event is when, at the moment of querying, the event is not yet in the relay, but might later be published to it by other clients or relays.

But in order for a user to even know an event's ID, the event has to exist on some relay before it can be shared.

HTTP get(nevent) always makes more sense.

I forgot about Datomic - that is exactly the model I'm proposing! Relays would be the data-store part, and DVMs would be the database clients that keep everything they need locally for fast, sophisticated querying.
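
Roughly this shape, as a sketch (the LocalIndex here just illustrates the split; it's not how any existing DVM is implemented):

```typescript
// Sketch of the Datomic-style split: the relay is the immutable log
// (storage), and the DVM is a query engine keeping a local copy.
// LocalIndex is my own illustration, not an existing DVM implementation.

interface NostrEvent {
  id: string;
  kind: number;
  content: string;
  // ...plus the other standard fields (pubkey, created_at, tags, sig)
}

class LocalIndex {
  private byId = new Map<string, NostrEvent>();
  private byKind = new Map<number, NostrEvent[]>();

  // Feed every event streamed from the relay into the local index.
  ingest(ev: NostrEvent): void {
    if (this.byId.has(ev.id)) return; // events are immutable: dedupe by id
    this.byId.set(ev.id, ev);
    const bucket = this.byKind.get(ev.kind) ?? [];
    bucket.push(ev);
    this.byKind.set(ev.kind, bucket);
  }

  // Queries are answered locally, with no relay round-trip.
  count(kind: number): number {
    return this.byKind.get(kind)?.length ?? 0;
  }
}
```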

But if we have "function calls" over relays, would that put too much pressure on relays? Should both "calls" and "returns" be ephemeral? What if a client just talks to a DVM directly?

Sometimes I just feel it's unnecessary to let relays broker everything.

A relay is like Kafka with cryptography. Using Kafka to do everything in the cloud was once a hot thing in the industry, but it's not a silver bullet.

I think those are all good questions. Function calls over relays could put a lot of pressure on relays as data stores, but I think it'll be ok if we combine hash-based syncing with caching.

Using relays as brokers is also OK in many situations, because DVMs effectively compress the results that relays need to serve through denormalization. In other words, the relay only has to serve the full dataset once, and the DVM produces a single number (as in the case of count) or a small result set (as in the case of search/recommend). Caching the results for a short time could likewise reduce the load of calculating the same result multiple times.

You're right that for higher-volume/frequency/cardinality requests, having DVMs also serve an HTTP endpoint would be an improvement. @uncleJim21 wants this. I think we should exhaust the possibilities of DVMs before adding yet another interface, but we will eventually need it.
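
For example, a short-TTL cache in front of the computation could look like this (the 30-second TTL and the compute callback are assumptions for illustration):

```typescript
// Sketch of short-lived result caching for a DVM: identical requests
// within the TTL reuse a computed result (a count, a search result set)
// instead of re-scanning the dataset. The 30s TTL is an assumption.

const TTL_MS = 30_000;
const cache = new Map<string, { value: string; expires: number }>();

async function handleRequest(
  key: string, // e.g. a hash of the request's method + params
  compute: () => Promise<string>, // the expensive DVM computation
): Promise<string> {
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return hit.value; // cache hit
  const value = await compute();
  cache.set(key, { value, expires: Date.now() + TTL_MS });
  return value;
}
```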