Thanks. I didn't read it negatively.
It's still a lot of work to productise, and I was on the fence about putting the effort in - but I'm more and more seriously considering it. I see the value and people really like it.
Yep. It's an incredible feature. Thanks again for building it!
When you spend time in other countries or learn other languages, you realise how much English alone excludes you from accessing: awesome foreign films and media, lots of web-based conversations, and more diverse local news coverage from people instead of MSM crap.
You may have heard about "the gossip model", which has this horribly confusing name because of the client called Gossip at https://github.com/mikedilger/gossip -- which has nothing to do with actual gossiping. It should be called the "outbox model"; that would be a better name.
But take a look at NIP-65. The basic idea is that you can find the relays someone announces they're publishing to and you can read their notes -- and their notes only -- from there. See also the Nostrovia podcast episode with Mike Dilger, https://mikedilger.com/gossip-relay-model.mp4 and https://fiatjaf.com/3f106d31.html
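To make that concrete, here's a minimal sketch (my own illustration, not the gossip client's code) of pulling someone's write relays out of their NIP-65 kind 10002 relay list - an "r" tag with no marker, or with a "write" marker, points at a relay that author publishes to:

```typescript
// Minimal NIP-65 parsing sketch. The event shape follows NIP-01;
// the helper name and structure are illustrative assumptions.

interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  created_at: number;
  tags: string[][];
  content: string;
  sig: string;
}

// Given someone's kind 10002 relay-list event, return the relays they
// announce as write relays -- i.e. where the outbox model goes to read
// that author's notes (and their notes only).
function writeRelaysFrom(relayList: NostrEvent): string[] {
  if (relayList.kind !== 10002) return [];
  return relayList.tags
    .filter((tag) => tag[0] === "r" && tag[1] !== undefined)
    // No marker means the relay is used for both read and write.
    .filter((tag) => tag[2] === undefined || tag[2] === "write")
    .map((tag) => tag[1]);
}
```

Reading that author's notes from those relays, and only their notes, is the whole outbox idea.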
#[3] I had an issue yesterday in gossip with the model, as it picked two main relays (each with 90% of my followers) - the catch was that one of those relays kept disconnecting and was unstable. That meant I was left with one relay for 90%.
I know there is a setting for "min N relays per pubkey up to a max of M relays". It wasn't working - maybe because I'm using master... but it's an edge case the model needs to allow for: unstable relays. Maybe the gossip client supports it, or partially, but it didn't seem to be working.
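Purely as an illustration of that edge case (this is not gossip's actual algorithm, and every name below is made up): a greedy picker can enforce a per-pubkey minimum while skipping relays that keep dropping the connection.

```typescript
// Illustrative outbox-style relay selection: cover every followed pubkey
// with at least `minPerPubkey` relays, skip relays known to be unstable,
// and stop at `maxRelays`. Names and structure are assumptions for this
// sketch, not the gossip client's implementation.

type RelayUrl = string;
type Pubkey = string;

function pickRelays(
  relaysByPubkey: Map<Pubkey, RelayUrl[]>, // each follow's announced write relays
  minPerPubkey: number,
  maxRelays: number,
  unstable: Set<RelayUrl>, // relays that keep disconnecting
): Set<RelayUrl> {
  const chosen = new Set<RelayUrl>();
  const coverage = new Map<Pubkey, number>();

  while (chosen.size < maxRelays) {
    // Greedy step: score each candidate by how many under-covered pubkeys it would help.
    const scores = new Map<RelayUrl, number>();
    for (const [pubkey, relays] of relaysByPubkey) {
      if ((coverage.get(pubkey) ?? 0) >= minPerPubkey) continue;
      for (const relay of relays) {
        if (chosen.has(relay) || unstable.has(relay)) continue;
        scores.set(relay, (scores.get(relay) ?? 0) + 1);
      }
    }
    if (scores.size === 0) break; // everyone covered, or nothing usable left

    const [best] = [...scores.entries()].sort((a, b) => b[1] - a[1])[0];
    chosen.add(best);

    // Update coverage counts for pubkeys served by the newly chosen relay.
    for (const [pubkey, relays] of relaysByPubkey) {
      if (relays.includes(best)) {
        coverage.set(pubkey, (coverage.get(pubkey) ?? 0) + 1);
      }
    }
  }
  return chosen;
}
```

When a chosen relay later turns out to be flaky, adding it to the unstable set and re-running the picker avoids being left with a single relay carrying 90% of the load.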
This is the latest profile/meta event I've seen for your pubkey. It's missing lud06 or lud16 values: lud06 is an LNURL pay string and lud16 is a lightning address (email-like format). You only need one.
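For reference, lud06/lud16 just live in the kind 0 content JSON. A minimal sketch of checking for them (the field names come from the NIPs; the function name and example comment are mine):

```typescript
// A kind 0 (set_metadata) event carries a JSON string in `content`.
// This sketch checks whether a profile can receive zaps via lud06/lud16.

interface ProfileMetadata {
  name?: string;
  about?: string;
  picture?: string;
  lud06?: string; // LNURL pay string
  lud16?: string; // lightning address, e.g. "name@example.com" (made-up example)
}

function canReceiveZaps(kind0Content: string): boolean {
  try {
    const profile: ProfileMetadata = JSON.parse(kind0Content);
    return Boolean(profile.lud06 || profile.lud16); // only one is needed
  } catch {
    return false; // malformed metadata JSON
  }
}
```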
I can easily halve it too, since a chunk of it is double handling.
Ha. That's just a one-off calculation from the beginning of time. I can easily do delta updates from there.
It's a shared 4 vCPU 8GB box, and it's running other workloads.
For my pubkey it was seconds. @jack is an outlier for sure.
With roll-up, the query goes from 400ms to around 1.5ms at hourly resolution.
Yep. NostrGraph.
For most people it loads OK as raw data up to around a month, with 4+ graphs on the same page. I've created roll-up tables and will add delta/incremental aggregation. Should be fine then.
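Roughly what I mean by delta/incremental aggregation, as a sketch only (not NostrGraph's actual schema or code): keep hourly buckets and fold in only events newer than a high-water mark, so the from-the-beginning-of-time pass happens once.

```typescript
// Sketch of incremental hourly roll-up: counts of events per hour bucket,
// updated only with events newer than the last processed timestamp.
// Structure and names are assumptions for illustration.

interface RollupState {
  countsByHour: Map<number, number>; // hour bucket (unix seconds) -> event count
  lastProcessedAt: number;           // created_at high-water mark
}

function applyDelta(
  state: RollupState,
  newEvents: { created_at: number }[],
): RollupState {
  let lastProcessedAt = state.lastProcessedAt;
  for (const event of newEvents) {
    if (event.created_at <= state.lastProcessedAt) continue; // already rolled up
    const hourBucket = Math.floor(event.created_at / 3600) * 3600;
    state.countsByHour.set(
      hourBucket,
      (state.countsByHour.get(hourBucket) ?? 0) + 1,
    );
    lastProcessedAt = Math.max(lastProcessedAt, event.created_at);
  }
  // Late or out-of-order events older than the high-water mark would need
  // separate handling; this sketch ignores them.
  return { countsByHour: state.countsByHour, lastProcessedAt };
}
```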
Can't make it public like this for all pubkeys. Maybe less granular, with limits. Likely behind a payment.
Thanks. Slowly getting closer to making it scale. It took 24 minutes to process the roll-up data for @Jack for 3 months. Ha.
Coracle does it too. Gossip has an opt-in setting for it.
I think from a data perspective it's not valuable enough to enable by default. Why bloat client cache, databases and bandwidth? It's just a tax on each event.
** to your read relays.. I don't know now. Maybe to your paid relays.
I forgot a point. Ha.
Basically, maybe Nostr clients can have a setting to publish inbound DMs to your write relays as they are received.
Just an idea anyway.
If users are paying a relay or service for storing their events, they likely will want to re-publish their inbound DMs to that same service.
What's interesting is that while the pubkey isn't yours, technically it's data you want to have access to and likely want redundant or backed up.
How does this impact or influence relay data retention or usage quotas? Not sure. A spammer could flood you with DMs and bloat your usage limit. Or your DM sender may now be using the same paid relay storage as you - or may stop paying one day and then all the DMs are gone.
One approach could be: if you AUTH to a relay, then it can persist that DM from another pubkey under your account or data usage limit (or just detect that it is an inbound DM for you). You could then see events you re-published/broadcast/stored from other pubkeys and manage that data. Great.
Next consideration: can the pubkey sender now delete your inbound DM on your paid relay with a kind 5?
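A sketch of the client-side half of the idea, with the NIP-42 AUTH handshake omitted and all names assumed (the wire format itself is plain NIP-01): detect that an incoming kind 4 tags your pubkey, then forward the signed event unchanged to your paid relay.

```typescript
// Sketch: re-publish inbound DMs to your own paid relay.
// The relay connection, setting and function names are assumptions;
// the ["EVENT", ...] message is the standard NIP-01 publish format.

interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  created_at: number;
  tags: string[][];
  content: string;
  sig: string;
}

// An inbound DM is a kind 4 event authored by someone else whose `p` tag
// points at your pubkey.
function isInboundDm(event: NostrEvent, myPubkey: string): boolean {
  return (
    event.kind === 4 &&
    event.pubkey !== myPubkey &&
    event.tags.some((tag) => tag[0] === "p" && tag[1] === myPubkey)
  );
}

// Assumes a browser-style WebSocket to the paid relay that has already
// completed NIP-42 AUTH. The signature stays valid because the event is
// forwarded unmodified.
function rebroadcastDm(paidRelay: WebSocket, event: NostrEvent, myPubkey: string): void {
  if (!isInboundDm(event, myPubkey)) return;
  paidRelay.send(JSON.stringify(["EVENT", event]));
}
```

Whether the relay then counts that against your quota, and whether the sender's kind 5 can later delete it, is exactly the open policy question above.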
Ha. Nostr is deceptively simple. The real world fights back.
The other key use case I have is effectively batching: you may want to look up N profiles and understand which ones you need to try fetching again from different relay/s because they didn't return results - you got EOSE before a match for M query keys. Obviously a max number of retries or attempts is needed, as there may not be any results to find.
This is most useful when querying for updates against local cached event state, or even against what your DB has - like refreshing user profiles.
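A rough sketch of that bookkeeping (names are mine, not from any client): track which requested pubkeys came back before EOSE, and queue the rest for a retry against different relays, up to a cap.

```typescript
// Sketch of batched profile lookups with retry bookkeeping.
// Pubkeys that returned a kind 0 before EOSE are removed from the batch;
// the rest are retried on other relays until maxAttempts is reached.

interface BatchState {
  pending: Set<string>;          // pubkeys still without a result
  attempts: Map<string, number>; // per-pubkey attempt count
}

function newBatch(pubkeys: string[]): BatchState {
  return { pending: new Set(pubkeys), attempts: new Map() };
}

// Call for each kind 0 event received on the subscription.
function markFound(state: BatchState, authorPubkey: string): void {
  state.pending.delete(authorPubkey);
}

// Call on EOSE: returns the pubkeys worth retrying on other relays.
// Pubkeys that hit maxAttempts are dropped; there may simply be nothing to find.
function onEose(state: BatchState, maxAttempts: number): string[] {
  const retry: string[] = [];
  for (const pubkey of state.pending) {
    const tried = (state.attempts.get(pubkey) ?? 0) + 1;
    state.attempts.set(pubkey, tried);
    if (tried < maxAttempts) retry.push(pubkey);
    else state.pending.delete(pubkey);
  }
  return retry;
}
```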
Nostr. A short story.
Found this AI generated Nostr reply bot.
It feels like it's just trying to hear its own voice and say something without adding any value. ChatGPT suffers from the same issue.
#[0]
I'd love it if a client app dev could map out common Nostr subscriptions that can be used across apps. Even just definitions and an example query.
There is a lot of commonality, but it's obscure today - and more common queries can help relay architecture and caching strategies too.
Obvious ones are:
# App User State
Get kinds 0/3/10002 for current user (priority)
Get kinds 0/3/10002 for the current user's follows (maybe with a since if fetched recently)
Get kind 4 received DMs (priority) (query from N following write relays)
Get kind 4 received DMs (low priority) (query from your read relays)
Get kind 4 sent DMs (should be cached, but maybe more from other apps - use since)
Get notifications (optional following pubkey filter) (priority for following write relays, and your read relays)
# Timeline view
Get home timeline (posts only) (with stats?) (should focus on querying the best N relays for each followed user - not querying all relays with the same query)
ā¦
# Event detailed view
Get reactions for an event
Get reports for an event
Get parent event chain
Get child event replies
ā¦
# View profile
Refresh meta
Load their timeline
Get stats like followers, zaps, following, etc.
ā¦
Many of these can likely be merged or carved out into common queries.
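As a starting point, and purely as my own sketch rather than an agreed definition, the App User State set could be expressed as NIP-01 filters roughly like this (the pubkeys and the since value are placeholders):

```typescript
// Sketch of the "App User State" queries as NIP-01 filters.
// <my-pubkey>, the follows list and the since timestamp are placeholders.

const myPubkey = "<my-pubkey>";
const follows = ["<followed-pubkey-1>", "<followed-pubkey-2>"];

const appUserState = [
  "REQ",
  "app-user-state",
  // kinds 0/3/10002 for the current user (priority)
  { authors: [myPubkey], kinds: [0, 3, 10002] },
  // kinds 0/3/10002 for follows, limited with `since` if fetched recently
  { authors: follows, kinds: [0, 3, 10002], since: 1700000000 },
  // kind 4 DMs received by the current user
  { kinds: [4], "#p": [myPubkey] },
  // kind 4 DMs sent by the current user (use `since` against local cache)
  { kinds: [4], authors: [myPubkey], since: 1700000000 },
];

// A client would send this over a relay websocket as JSON:
// socket.send(JSON.stringify(appUserState));
```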
This search filter NIP can help. Only a few relays I know of support it. https://github.com/nostr-protocol/nips/blob/master/50.md
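For reference, NIP-50 just adds a search field to an ordinary filter, so a profile search subscription could look like this (the query text is a made-up example):

```typescript
// NIP-50 adds a `search` field to a normal NIP-01 filter.
// Only relays that implement NIP-50 will honour it; others may ignore
// or reject the subscription.

const searchReq = [
  "REQ",
  "profile-search",
  { kinds: [0], search: "nostrgraph", limit: 20 },
];
```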
I think clients should cache/index your own events better. My whole 3,500 event history is 3.5kB. Don't even need to search a remote server. It's also a poor man's backup source then.
I definitely see long-term retrieval services/relays existing.
** 3.5MB
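A sketch of the poor man's backup idea (file name and helper names are assumptions): append every event signed by your own pubkey to a local JSONL file, which doubles as something you can re-broadcast to a new relay later.

```typescript
// Sketch: cache your own events locally as newline-delimited JSON.
// At ~3.5MB for a few thousand events this is trivial to keep on disk.
// Deduplicating by event id is left out of this sketch.

import { appendFileSync, existsSync, readFileSync } from "node:fs";

const CACHE_FILE = "my-events.jsonl"; // assumed path

interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  created_at: number;
  tags: string[][];
  content: string;
  sig: string;
}

// Append events authored by the local pubkey; skip everything else.
function cacheOwnEvent(event: NostrEvent, myPubkey: string): void {
  if (event.pubkey !== myPubkey) return;
  appendFileSync(CACHE_FILE, JSON.stringify(event) + "\n");
}

// Read the whole cache back, e.g. to re-broadcast to a new relay.
function loadOwnEvents(): NostrEvent[] {
  if (!existsSync(CACHE_FILE)) return [];
  return readFileSync(CACHE_FILE, "utf8")
    .split("\n")
    .filter((line) => line.length > 0)
    .map((line) => JSON.parse(line) as NostrEvent);
}
```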
