I'm the author of gossip, a desktop client (not a mobile app), but I have something to say on this. Gossip downloads only about 4 MB when I start it in the morning and run it for an hour. Since that is several orders of magnitude less than some other clients, I thought I'd make a list of the reasons why:

1. Duplicate Events - many clients subscribe with the same filters on all of the "read" relays. So if a person has 10 read relays, they receive each event 10 times. They could instead subscribe in a way that fetches only N copies, where N is a configurable setting (gossip defaults to 2 or 3).

2. Not Dynamically Connecting to Relays - when clients don't dynamically connect to the 'write' relays of whoever you follow, users are incentivized to add lots and lots of relays as a hack to try to get that content, aggravating issue 1. If clients smartly went to the write relays (based on relay lists), all of the content a user has subscribed to would arrive (in the best case) and users would no longer feel the need to add massive numbers of read relays.
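The idea behind points 1 and 2 can be sketched as follows. This is a hypothetical illustration of outbox-style relay selection, not gossip's actual code; the relay URLs, author names, and the `plan_subscriptions` helper are all made up for the example.

```python
# Sketch: for each followed author, subscribe on at most N of *their*
# write relays, instead of asking every configured read relay for everyone.

NUM_COPIES = 2  # analogous to gossip's "number of copies" setting

def plan_subscriptions(write_relays_by_author, num_copies=NUM_COPIES):
    """Return {relay_url: set_of_authors} covering each author num_copies times."""
    plan = {}
    for author, relays in write_relays_by_author.items():
        for url in relays[:num_copies]:  # naive: take the first N advertised relays
            plan.setdefault(url, set()).add(author)
    return plan

# Hypothetical relay lists for three followed authors.
follows = {
    "alice": ["wss://relay-a", "wss://relay-b", "wss://relay-c"],
    "bob":   ["wss://relay-b"],
    "carol": ["wss://relay-c", "wss://relay-a"],
}

plan = plan_subscriptions(follows)
# Each author appears on at most 2 relays, so each of their events is
# downloaded at most twice, not once per configured read relay.
```

A real implementation would also balance load across relays and handle authors with no published relay list; this only shows the duplicate-count bound.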

3. Counting how many followers you have is expensive. Kind-3 contact lists are long, and you need to pull one from each potential follower to make such a count. Especially if done across many relays (where the same lists are pulled multiple times, once per relay), this could be 10-20 MB on its own. Then how often is the client triggered to recount?
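A back-of-envelope calculation shows how such a recount adds up. The list count, average event size, and relay multiplier below are illustrative assumptions, not measurements:

```python
# Rough estimate of the bandwidth behind one follower recount.
candidate_lists = 5_000   # kind-3 contact lists that must be inspected (assumed)
avg_list_bytes = 2_000    # assumed average size of a serialized kind-3 event
relays_queried = 2        # each list re-downloaded once per relay queried

total_bytes = candidate_lists * avg_list_bytes * relays_queried
total_mb = total_bytes / 1_000_000
# With these assumptions: 5,000 * 2 KB * 2 = 20 MB per recount.
```

Under these assumptions a single recount lands in the 10-20 MB range mentioned above, which is why the recount trigger frequency matters so much.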

4. Downloading of avatars: gossip caches these so it doesn't have to re-download them. Any client that uses an IMG tag without browser caching is probably downloading these over and over, at worst every time a post scrolls into view.
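A minimal sketch of the kind of URL-keyed disk cache point 4 describes; the `cached_fetch` helper, cache directory, and file naming are assumptions for illustration, not gossip's actual scheme:

```python
import hashlib
import os
import tempfile

CACHE_DIR = os.path.join(tempfile.gettempdir(), "avatar-cache")

def cached_fetch(url, fetch, cache_dir=CACHE_DIR):
    """Return image bytes for url, calling fetch(url) only on a cache miss."""
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, hashlib.sha256(url.encode()).hexdigest())
    if os.path.exists(path):          # hit: served from disk, no download
        with open(path, "rb") as f:
            return f.read()
    data = fetch(url)                 # miss: download once and store
    with open(path, "wb") as f:
        f.write(data)
    return data
```

With this in place, a post scrolling into view a second time costs zero network traffic for its avatar.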

5. Content images and content web page pre-rendering: This can be very expensive, but is probably unavoidable on rich UI clients. Gossip is a "poor" UI client, without any images or prerendered links (it just shows links that you can click on to open your browser). But with caching, repeated downloading of the same thing can be avoided.

6. Re-checking of NIP-05 could be done periodically - perhaps daily if it failed, or every 2 weeks if it passed. Probably the worst strategy is re-checking every time a post scrolls into view.
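The schedule in point 6 could be expressed as a simple due-check. The `needs_recheck` helper and its constants are hypothetical, using the daily/two-week intervals suggested above:

```python
import time

DAY = 86_400
RECHECK_AFTER_FAIL = 1 * DAY    # failed last time: retry daily
RECHECK_AFTER_PASS = 14 * DAY   # passed last time: retry every 2 weeks

def needs_recheck(last_checked, last_result_ok, now=None):
    """Decide whether a NIP-05 identifier is due for re-verification."""
    now = time.time() if now is None else now
    interval = RECHECK_AFTER_PASS if last_result_ok else RECHECK_AFTER_FAIL
    return now - last_checked >= interval
```

The client would consult this before any NIP-05 HTTP fetch, instead of tying verification to scroll events.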

There are probably others.

Discussion

Sent some sats, went to try it but the Windows link 404's...

I was on this part of your page.

ArchLinux: https://aur.archlinux.org/packages/gossip or https://aur.archlinux.org/packages/gossip-git

Debian: See the https://github.com/mikedilger/gossip/releases area for a file named something like gossip-VERSION-ARCH.deb.zip

Microsoft Windows: See the https://github.com/mikedilger/gossip/releases area for a file named something like gossip-VERSION.msi.zip

Have grabbed the file you linked to :-)

Oh I see, the README Link is broken. Will fix. Thanks.

Oh that's weird, you are right - the README is okay, github is rewriting it into a broken URL.

We need that nostr replacement of github.

Ciaone

🤙

Reading this note on Gossip... 🤙

🤝

Thank you for sharing. Very helpful to know.

NIPS needed:

1) "select: ['id']" filter so you only get matching event ids and not the full event 10 times. You can then ask only 1 node for the full event, or skip if you already have it.

2) Bloom filter for "I already have these events, don't send them". Can be resource intensive on the relay side and not usable in all queries.

3) Follow distance filter for authors, so you don't get irrelevant spam. Iris discards stuff from unknown authors on the client side, but would be better if they weren't sent and processed in the first place. Shouldn't be difficult to implement with some SQL wizardry.
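For point 1, here is what the proposed `select` extension might look like on the wire. Note that `select` is only this thread's suggestion and is not part of NIP-01 or any published NIP; only the `["REQ", id, filter]` framing is standard:

```python
import json

filter_with_select = {
    "kinds": [1],
    "authors": ["<pubkey-hex>"],
    "limit": 100,
    "select": ["id"],   # HYPOTHETICAL field: ask the relay for event ids only
}

# Standard NIP-01 framing around the hypothetical filter field.
req = json.dumps(["REQ", "sub1", filter_with_select])
```

The client would then fetch each full event once, from a single relay, and skip any ids it already has locally.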

Thanks for answering the question. Answer me this one:

Why does the zaps button keep appearing and disappearing on iris.to?

On profiles or posts? Haven't seen before.

Posts. Lightning addys are on profiles, but I am missing the zaps button on posts quite often.

On my cell, Nostr runs off of Firefox. I reload the app by refreshing the browser button. When I reload it, the lightning bolts appear.

Which client r u running?

Iris

I'm running the iris.to client on firefox and once in a while I see the zaps button but usually not.

On my desktop.

I like these ideas.

1. I think the 'select' filter isn't really a filter (selecting which events); it is requesting a different kind of result, so it should probably be a new REQID message (IMHO).

2. Bloom filters are interesting. strfry is using merkle trees, presumably to see which branches need refreshing... but I'm not sure how applicable that is to a much smaller set of IDs or how it compares to bloom filters.

3. I'm not sure we can presume a relay has everybody's contact lists and can compute a follow distance. Maybe some relays could specialize in doing that though. Clients probably need to have a concept of which relays are "aggregators". All kinds of aggregations like this, reaction counts, how many followers someone has, it all depends on a relay having all that data which I suspect over time will be a less and less accurate assumption for general relays as people move off the central crowded relays.

One option is a bloom filter of authors. It's not good for history queries, but for new events it would be useful. 10k authors from your social graph with 1% false positive rate would be around 12 KB. Would limit incoming spam nicely.
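That 12 KB figure checks out against the standard Bloom filter sizing formula, m = -n·ln(p)/(ln 2)² bits, as a quick calculation shows:

```python
import math

def bloom_bits(n, p):
    """Minimum Bloom filter size in bits for n elements at false-positive rate p."""
    return math.ceil(-n * math.log(p) / (math.log(2) ** 2))

n, p = 10_000, 0.01
bits = bloom_bits(n, p)
kilobytes = bits / 8 / 1000
# Roughly 96,000 bits, i.e. about 12 KB, matching the estimate above.
```

Small enough to send with a subscription, which is what makes the author-filter idea plausible.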

👍

Hi Martti, maybe unrelated to this post, iris.to desktop client is great missing below zapping implementation on snort.social, any plan to add?

Is this video capture from your browser app on iris.to? I do not see this when clicking on the sats icon. The HREF link starts with "lightning:" and there is no app registered for it.

It's the browser version

All browser, it just doesn't work on any browser on Windows OS. It works fine on phone. Same works fine on snort.social .

#[4] helpful info ☝️

damus does all of these except 1, since that would possibly miss messages

Excellent!

Nice.

Point 1 can't really be addressed without addressing point 2 first because, as you say, you would miss messages. If you know that person P posts on relays (X, Y, Z), you can ask 2 relays (X, Z) for those notes and because they both should have them, it's usually good enough. If not, ask on 3 relays.

If you instead ask the user-configured relays for P's posts, then asking only 2 of those relays is probably going to miss tons of messages. But even if you ask all of the user-configured relays, you will both miss many messages and will also download a lot of duplicate data. The only way to get all the messages of all the people followed (and avoid excessive duplicate downloading) is to go out there to that unloved relay in Timbuktu that the user didn't configure (but that P has in his relay list) and fetch them from where P wrote them.

Of course, fetching from relays that the user didn't configure has implications.

There's another issue where many relays are overloaded and don't return things in time, so relying on a subset of relays can make loading slower. Perhaps you could collect response time stats and prioritize the fast ones? Fetching from all just seems simpler, even though it can be bandwidth intensive.

So my client subscribes to all the pubkeys I follow; could it remember which 1 or 2 relays responded fastest for each pubkey, and after the first poll only subscribe to the fast relays for each pubkey for the remainder of the session?

Seems like it could save a lot of bandwidth and relay load.
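The remember-the-fastest-relays idea could be sketched as a small latency table; the `RelayLatency` structure below is hypothetical, not any client's actual implementation:

```python
class RelayLatency:
    """Track per-(pubkey, relay) response times; keep only the fastest few."""

    def __init__(self):
        self.samples = {}  # (pubkey, relay_url) -> last response time in ms

    def record(self, pubkey, relay_url, millis):
        self.samples[(pubkey, relay_url)] = millis

    def fastest(self, pubkey, keep=2):
        """Relay URLs that answered fastest for this pubkey, best first."""
        timed = [(ms, url) for (pk, url), ms in self.samples.items() if pk == pubkey]
        return [url for ms, url in sorted(timed)[:keep]]
```

After the first poll populates the table, later subscriptions would go only to `fastest(pubkey)`, cutting both bandwidth and relay load for the rest of the session.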

Fetching of events may not be a lot of bandwidth, relatively speaking; it would have to be measured. I suspect web content referenced by events constitutes a lot more data than the event structures themselves.

Your third point is interesting.

Yesterday I mentioned separating identity and clients more; Nostrgram has gone the other way and created a big db to cache all that info and just keeps the most current kind 3s.

Would it make sense to have a different relay type for these? Melvin has mentioned potentially a different type for git on nostr, as there's no sense filling feeds with code. It seems to me these contact lists and profiles could do with their own handling also, and that would potentially simplify your desire to follow users to where they read/write.

Proudly reading your post in the freshly released version 0.4.0 🤙

Data usage largely depends on the number of features you want to present to the user at once.

New clients tend to be faster/data-leaner because they don't do much.

That number of features drives most of the pings, re-downloads, metadata fetches, etc. For instance, if you want to make sure NIP-05 is valid, because if it's not it's going to be a problem for your users, you MUST check at every post. There is no way around it. That's a feature. And every feature has a cost. You can also choose to let a few hours pass before you alert your users. That will use way less bandwidth, but your users might suffer from it. It's a choice.

Trade offs.

It's all about trade offs
