Fucking Iris is using GBs of browser storage and becoming unresponsive. I can't wait for all the features to get implemented in Primal. It's so fast, but it's missing zaps, notifications, DMs...


Discussion

Primal?

primal.net

It's slick and fast AF but not all features are implemented yet.

I submitted your feedback to the #[2] GitHub board: auto-manage storage to keep it under a size limit.

What’s the right size? 1 MB, 10, 100? 500 MB?
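Something along these lines is what I had in mind, just as a sketch; the byte limit, the event record shape, and the pruning helper are made up for illustration, not anything Iris actually has:

```typescript
// Sketch: keep a local event cache under a byte budget by evicting oldest events first.
// MAX_BYTES, EventRecord and the Map-based store are illustrative, not Iris internals.
interface EventRecord {
  id: string;
  created_at: number; // unix seconds, as in nostr events
  raw: string;        // serialized event JSON
}

const MAX_BYTES = 50 * 1024 * 1024; // whatever limit the user picks
const store = new Map<string, EventRecord>();
let usedBytes = 0;

function addEvent(ev: EventRecord): void {
  if (store.has(ev.id)) return;
  store.set(ev.id, ev);
  usedBytes += ev.raw.length;
  pruneOldest();
}

function pruneOldest(): void {
  if (usedBytes <= MAX_BYTES) return;
  // Evict oldest-first until we are back under budget.
  const byAge = [...store.values()].sort((a, b) => a.created_at - b.created_at);
  for (const ev of byAge) {
    if (usedBytes <= MAX_BYTES) break;
    store.delete(ev.id);
    usedBytes -= ev.raw.length;
  }
}
```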

I don't think there is a right size.

The problem is that Iris is a website and it's fully implementing the protocol.

The local data issue comes from a scaling problem in the protocol. If you query on demand, as you view, you may be hitting a relay you don't normally connect to, or one that's very busy. You pay connection overhead, or you wait your turn. It gets worse because everyone has 7+ relays, and a client will query at least several of them to get the fastest return.
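Roughly what a client has to do today, as a sketch (the relay URLs and filter are placeholders, and this is just "first relay to finish wins"):

```typescript
// Sketch: open the same REQ on several relays and resolve with whichever relay
// reaches EOSE first. Everything else is discarded.
function fetchFromFastestRelay(relays: string[], filter: object, timeoutMs = 5000): Promise<any[]> {
  const sub = Math.random().toString(36).slice(2);
  const attempts = relays.map(
    (url) =>
      new Promise<any[]>((resolve, reject) => {
        const events: any[] = [];
        const ws = new WebSocket(url);
        const timer = setTimeout(() => { ws.close(); reject(new Error(`timeout: ${url}`)); }, timeoutMs);
        ws.onopen = () => ws.send(JSON.stringify(["REQ", sub, filter]));
        ws.onmessage = (msg) => {
          const [type, , payload] = JSON.parse(msg.data);
          if (type === "EVENT") events.push(payload);
          if (type === "EOSE") { clearTimeout(timer); ws.close(); resolve(events); }
        };
        ws.onerror = () => { clearTimeout(timer); reject(new Error(`failed: ${url}`)); };
      })
  );
  // First relay to answer completely wins; the rest are ignored.
  return Promise.any(attempts);
}
```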

The amount of work the relays do in responding to client requests scales poorly. They rate limit these requests to keep from being DDoSed.
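For example, a relay might gate REQs with something like a per-connection token bucket; the numbers here are made up, real relays tune this per deployment:

```typescript
// Sketch: relay-side token-bucket rate limiting per client connection.
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private capacity: number, private refillPerSec: number) {
    this.tokens = capacity;
  }

  // Returns true if the request is allowed, false if it should be throttled.
  allow(): boolean {
    const now = Date.now();
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.lastRefill) / 1000) * this.refillPerSec
    );
    this.lastRefill = now;
    if (this.tokens < 1) return false;
    this.tokens -= 1;
    return true;
  }
}

// e.g. 10 REQs per second sustained, bursts of 30, per connection
const perConnectionLimit = new TokenBucket(30, 10);
```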

I think we'll see that the websites/phone apps need a middle-layer service with an index that scales better and pulls notes only as you need them, rather than storing every note seen for just a few hours or days.
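Something like this, as a sketch; the /feed endpoint and its parameters are invented for illustration, there is no standard indexer API:

```typescript
// Sketch: a client pulling only what it needs from a hypothetical indexer service,
// instead of subscribing to raw relays.
async function loadFeedPage(indexerUrl: string, pubkey: string, cursor?: string) {
  const params = new URLSearchParams({ pubkey, limit: "50" });
  if (cursor) params.set("cursor", cursor);
  const res = await fetch(`${indexerUrl}/feed?${params}`);
  if (!res.ok) throw new Error(`indexer error: ${res.status}`);
  // The indexer returns pre-sorted, pre-deduplicated events plus a cursor for the next page.
  return (await res.json()) as { events: any[]; nextCursor?: string };
}
```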

Fully implementing the protocol, which is basically a global protocol for every message, is going to need real hardware resources. It doesn't seem to work well in a lightweight phone or web app.

Alternatively, you can limit things to only the last 1-3 hours and a small set of friends of friends, instead of friends of friends of friends or more. That seems to be what #[3] does if you close and reopen it: it doesn't get slow or unresponsive, but it only has a few hours of the global feed, and it seems to have no follower feed until global has synced those notes, at which point the follower feed comes back.
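The narrow query that implies looks roughly like this NIP-01 filter; `follows` is a placeholder for your contact list:

```typescript
// Sketch: only ask for the last few hours from direct follows,
// rather than trying to sync a deep friends-of-friends graph.
const follows: string[] = [/* pubkeys you follow */];
const threeHoursAgo = Math.floor(Date.now() / 1000) - 3 * 60 * 60;

const narrowFilter = {
  kinds: [1],            // text notes
  authors: follows,      // direct follows only, not follows-of-follows
  since: threeHoursAgo,  // only the last ~3 hours
  limit: 500,
};

// ws.send(JSON.stringify(["REQ", "feed", narrowFilter]));
```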

The problem is that relays don't respond promptly to queries, so we have to sync and save every message seen in order to view it later on demand. If relays were fast like professional cloud services, with indexed databases and so on, we could drop the sync data and just query as we go, which is how a web app is supposed to work.

The data needs to move out of the website/browser cache; that will never work well enough for the volume of data a day or a week of 5-10 relays produces. It needs to go into a real database, a real cache, a real index, and that's only possible for a service-style client like #[5] and Nostragram, or a fully database-backed desktop client like #[6]
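As a sketch of the difference, a desktop or service client can keep events in an indexed SQLite store; the schema here is illustrative, not what #[6] actually uses:

```typescript
// Sketch: events in a real database with real indexes, using better-sqlite3 as an example.
import Database from "better-sqlite3";

const db = new Database("events.db");
db.exec(`
  CREATE TABLE IF NOT EXISTS events (
    id         TEXT PRIMARY KEY,
    pubkey     TEXT NOT NULL,
    kind       INTEGER NOT NULL,
    created_at INTEGER NOT NULL,
    raw        TEXT NOT NULL
  );
  CREATE INDEX IF NOT EXISTS idx_events_author_time ON events (pubkey, created_at);
  CREATE INDEX IF NOT EXISTS idx_events_kind_time   ON events (kind, created_at);
`);

// Insert as events stream in from relays; duplicates are ignored by primary key.
const insert = db.prepare(
  "INSERT OR IGNORE INTO events (id, pubkey, kind, created_at, raw) VALUES (?, ?, ?, ?, ?)"
);

// Indexed query: recent notes served locally in milliseconds, no relay round trip.
const recentNotes = db.prepare(
  "SELECT raw FROM events WHERE kind = 1 AND created_at > ? ORDER BY created_at DESC LIMIT 100"
);
```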