Blake
b2dd40097e4d04b1a56fb3b65fc1d1aaf2929ad30fd842c74d68b9908744495b
#Bitcoin #Nostr #Freedom wss://relay.nostrgraph.net

Thanks. I didn’t read it negatively. šŸ™‚

It’s still a lot of work to productise, and I was on the fence about putting the effort in, but I’m more and more seriously considering it. I see the value, and people really like it.

Replying to fiatjaf

You may have heard about "the gossip model" which has this horribly confusing name because of the client called Gossip at https://github.com/mikedilger/gossip -- which has nothing to do with actual gossiping. It should be called the "outbox model", that would be a better name.

But take a look at NIP-65. The basic idea is that you can find the relays someone announces they're publishing to and you can read their notes -- and their notes only -- from there. See also the Nostrovia podcast episode with Mike Dilger, https://mikedilger.com/gossip-relay-model.mp4 and https://fiatjaf.com/3f106d31.html
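To make the NIP-65 idea concrete, here’s a minimal sketch (function and event values are mine, tag shapes per NIP-65) of reading someone’s kind 10002 relay-list event and picking the relays they write to, which is where you’d read their notes from:

```python
# Sketch of the NIP-65 "outbox" lookup: parse a kind 10002 relay-list
# event and return the relays the author *writes* to. Tag forms per
# NIP-65: ["r", url] (read+write), ["r", url, "read"], ["r", url, "write"].

def write_relays(relay_list_event: dict) -> list[str]:
    """Relays the author publishes to (where you should read their notes)."""
    relays = []
    for tag in relay_list_event.get("tags", []):
        if tag[0] == "r" and (len(tag) == 2 or tag[2] == "write"):
            relays.append(tag[1])
    return relays

# Example event (relay URLs are placeholders)
event = {
    "kind": 10002,
    "tags": [
        ["r", "wss://relay.example.com"],            # read + write
        ["r", "wss://inbox.example.com", "read"],    # read only
        ["r", "wss://outbox.example.com", "write"],  # write only
    ],
}
print(write_relays(event))  # ['wss://relay.example.com', 'wss://outbox.example.com']
```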

#[3] I had an issue yesterday in gossip with the model: it picked two main relays (each covering 90% of the people I follow). The catch was that one of those relays kept disconnecting and was unstable, which left me with a single relay for that 90%.

I know there is a setting for ā€œmin N relays per pubkey up to a max of M relaysā€, but it wasn’t working. Maybe that’s because I’m running master. Either way, unstable relays are an edge case the model needs to allow for. The gossip client may support it, at least partially, but it didn’t seem to be working for me.

This is the latest profile/meta event I’ve seen for your pubkey. It’s missing both lud06 and lud16 values: lud06 is an LNURL (bech32-encoded pay URL) and lud16 is a lightning address (email-like format). You only need one of them.

https://api.nostrgraph.net/beta/identities/npub1sl2d802cn5fcm204auehe4d40gu8wf9nnj6jfdna532grq3wp6pst8u52z.json
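For reference, a kind 0 profile’s content is a JSON string, and a zap-capable client only needs one of the two fields. A hypothetical check (the field names come from the NIPs; the helper and profile values are placeholders of mine):

```python
# Hedged sketch: does a kind 0 profile event carry a lightning field?
# lud06 = LNURL (bech32), lud16 = lightning address (user@domain).
import json

def has_lightning(profile_event: dict) -> bool:
    meta = json.loads(profile_event["content"])
    return bool(meta.get("lud06") or meta.get("lud16"))

profile = {
    "kind": 0,
    "content": json.dumps({"name": "blake", "lud16": "blake@example.com"}),
}
print(has_lightning(profile))  # True
```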

Ha. That’s just a one-off calculation from the beginning of time. I can easily do delta updates from there.

It’s a shared 4 vCPU 8GB. And it’s running other workloads.

For my pubkey it was seconds. @jack is an outlier for sure.

With roll-up, the query goes from 400 ms to around 1.5 ms at hourly resolution.
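The roll-up idea, sketched with made-up names: pre-bucket raw event timestamps by hour once, so later range queries touch roughly 24 rows per day instead of scanning every raw event.

```python
# Illustrative sketch of an hourly roll-up: aggregate raw unix
# timestamps into 3600-second buckets once, then query the buckets.
from collections import Counter

def hourly_rollup(created_ats: list[int]) -> Counter:
    """Count events per hour bucket (unix_ts // 3600)."""
    return Counter(ts // 3600 for ts in created_ats)

raw = [0, 100, 3599, 3600, 7200, 7201]  # six raw event timestamps
buckets = hourly_rollup(raw)
print(dict(buckets))  # {0: 3, 1: 1, 2: 2}
```

Incremental/delta aggregation then just adds new events into the affected buckets instead of recomputing from the beginning of time.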

Yep. NostrGraph.

For most people it loads OK as raw data, up to around a month of history with 4+ graphs on the same page. I’ve created roll-up tables and will add delta/incremental aggregation. Should be fine then.

Can’t make it public like this for all pubkeys. Maybe something less granular, with limits. Likely behind a payment.

Thanks. Slowly getting closer to making it scale. It took 24 minutes to process the roll-up data for @Jack for 3 months. Ha.

Coracle does it too. Gossip has an opt-in setting for it.

I think from a data perspective it’s not valuable enough to enable by default. Why bloat client cache, databases and bandwidth? It’s just a tax on each event.

If users are paying a relay or service for storing their events, they likely will want to re-publish their inbound DMs to that same service.

What’s interesting is that while the pubkey isn’t yours, it’s technically data you want access to, and likely want redundant or backed up.

How this impacts or influences relay data retention or usage quotas, I’m not sure. A spammer could flood you with DMs and bloat your usage limit. Or your DM sender may now be using the same paid relay storage as you, or may stop paying one day, and then all the DMs are gone.

One approach could be if you AUTH to a relay, then it can persist that DM from another pubkey under your account or data usage limit (or just detect it is an inbound DM for you). You could then see events you re-published/broadcast/store from other pubkeys and manage that data. Great.

The next consideration: can the sending pubkey now delete your inbound DM on your paid relay with a kind 5?
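Under the plain NIP-09 rule a relay only honors a kind 5 when its pubkey matches the referenced event’s author, and for an inbound DM that author is the sender, so the answer appears to be yes unless the relay special-cases stored copies. A sketch of that check (my own helper, not real relay code):

```python
# Sketch of the NIP-09 deletion rule a relay would apply: the kind 5
# must reference the target via an "e" tag AND be signed by the same
# pubkey that authored the target event. For an inbound kind 4 DM,
# that pubkey is the *sender*.

def deletion_allowed(delete_event: dict, target_event: dict) -> bool:
    refs = {t[1] for t in delete_event["tags"] if t[0] == "e"}
    return (delete_event["kind"] == 5
            and target_event["id"] in refs
            and delete_event["pubkey"] == target_event["pubkey"])

dm = {"id": "abc", "kind": 4, "pubkey": "sender"}
delete = {"kind": 5, "pubkey": "sender", "tags": [["e", "abc"]]}
print(deletion_allowed(delete, dm))  # True: the sender can remove it
```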

Ha. Nostr is deceptively simple. The real world fights back.

The other key use case I have is effectively batching: you may want to look up N profiles and work out which ones you need to try to fetch again from different relays because they didn’t return results, i.e. you got EOSE before a match for M query keys. Obviously a max number of retries/attempts is needed, as there may not be any results to find.

This is most useful when querying for updates against local cached event state, or even against what your DB has, like refreshing user profiles.
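A rough sketch of that retry bookkeeping (names and data shapes are mine, no networking): record which query keys came back before EOSE on each relay, and return what’s still missing after a capped number of attempts.

```python
# Hedged sketch of batched lookup retries: each round maps a relay URL
# to the set of pubkeys that returned results there before EOSE.
# Whatever is still missing after max_attempts goes unresolved.

def plan_retries(wanted: set[str],
                 rounds: list[dict[str, set[str]]],
                 max_attempts: int = 3) -> set[str]:
    """Return pubkeys still needing a fetch from some other relay."""
    missing = set(wanted)
    for attempt, results in enumerate(rounds):
        if attempt >= max_attempts or not missing:
            break
        for found in results.values():
            missing -= found
    return missing

rounds = [{"wss://r1": {"alice", "bob"}}, {"wss://r2": {"carol"}}]
print(plan_retries({"alice", "bob", "carol", "dave"}, rounds))  # {'dave'}
```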

Nostr. A short story šŸ˜‚

Found this AI generated Nostr reply bot.

It feels like it’s just trying to hear its own voice and say something without adding any value. ChatGPT suffers from the same issue.

#[0]

I’d love it if a client app dev could map out common Nostr subscriptions that can be used across apps. Even if it’s just definitions and an example query.

There is a lot of commonality, but it’s obscure today, and more common queries could help relay architecture and caching strategies too.

Obvious ones are:

# App User State

Get kinds 0/3/10002 for current user (priority)

Get kinds 0/3/10002 for current user following (maybe with a since if fetched recently)

Get kind 4 received DMs (priority) (query from N following write relays)

Get kind 4 received DMs (low priority) (query from your read relays)

Get kind 4 sent DMs (should be cached, but maybe more from other apps; use since)

Get notifications (optional following pubkey filter) (priority for following write relays, and your read relays)

# Timeline view

Get home timeline (posts only) (with stats?) (should focus on querying best N relays for each following user - not query all relays with same query)

…

# Event detailed view

Get reactions for an event

Get reports for an event

Get parent event chain

Get child event replies

…

# View profile

Refresh meta

Load their timeline

Get stats like followers, zaps, following, etc.

…

Many of these can likely be merged or carved out into common queries.
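As a starting point, a few of the ā€œApp User Stateā€ items above rendered as NIP-01 REQ filters (the pubkeys, the notification kinds, and the since window are illustrative placeholders, not a spec):

```python
# Hedged sketch: common Nostr subscriptions as NIP-01 filter dicts.
# All hex pubkeys and time windows below are placeholders.
import json
import time

me = "hex_pubkey_of_current_user"   # placeholder
follows = ["hex1", "hex2"]          # placeholder follow list
now = int(time.time())

app_user_state = [
    {"kinds": [0, 3, 10002], "authors": [me]},          # my meta/contacts/relays (priority)
    {"kinds": [0, 3, 10002], "authors": follows,
     "since": now - 86400},                             # follows, only if stale
    {"kinds": [4], "#p": [me]},                         # inbound DMs
]

# Notifications: events tagging me (kind list is illustrative)
notifications = {"kinds": [1, 6, 7, 9735], "#p": [me], "limit": 100}

print(json.dumps(["REQ", "app-state", *app_user_state]))
```

Defining even this much in one shared place would let relays see (and cache for) the same handful of filter shapes across clients.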

Replying to Blake

This search filter NIP can help. Only a few relays I know of support it. https://github.com/nostr-protocol/nips/blob/master/50.md
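For reference, a NIP-50 search is just a normal REQ whose filter carries an extra "search" field; relays that don’t implement the NIP will ignore or reject it. A sketch with a placeholder query string:

```python
# NIP-50 sketch: a standard REQ with a "search" field in the filter.
# Subscription id and query text are placeholders.
import json

search_req = ["REQ", "search-1",
              {"kinds": [1], "search": "nostr graph", "limit": 20}]
print(json.dumps(search_req))
```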

I think clients should cache/index your own events better. My whole 3,500 event history is 3.5kB. Don’t even need to search a remote server. It’s also a poor man’s backup source then.

I definitely see long-term retrieval services/relays existing.

** 3.5MB