I doubt this is implemented anywhere. For a profile page, or for a note's reactions, replies, and zaps, yes, it works.

For the main feed it becomes very hard: you only have two options, each with its pros and cons.

Option 1: Fetch each user from their set of write relays in a separate request

Pros:

- Doesn't use too much bandwidth

Cons:

- Creates a lot of requests; some popular relays will reject your queries for making too many requests.

- The client has to handle potentially thousands of relay subscriptions all at once.

Option 2: Get all the write relays for your entire follow list and deduplicate them.

Pros:

- Creates only one fat request: it's easy on the client, and popular relays see just one incoming request.

- You won't miss a single note.

Cons:

- Uses a shitload of bandwidth: up to 1 GB after just a few reloads in my tests.

- Keeps connections open to an unbounded number of relays at all times.
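Option 2's relay gathering can be sketched like this (`getWriteRelays` is a hypothetical stand-in for a NIP-65 kind-10002 lookup, not a real API):

```typescript
// Sketch of Option 2: collect every followed user's write relays,
// deduplicate them, and open one fat subscription against the result.
type Pubkey = string;

function dedupeRelays(
  follows: Pubkey[],
  getWriteRelays: (pk: Pubkey) => string[] // hypothetical NIP-65 lookup
): string[] {
  const relays = new Set<string>();
  for (const pk of follows) {
    for (const url of getWriteRelays(pk)) relays.add(url);
  }
  return [...relays];
}
```

The deduplicated set is what makes the request "fat": every relay in it gets the full authors filter, which is where the bandwidth cost comes from.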


Discussion

We couldn't agree either, so we have two solutions.

Lists:

get the NIP-65 data and calculate the best set of relays to connect to for a given coverage

pro: yields the theoretically ideal set of relays

con: does not work well on threads
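That "best relays for a given coverage" step is essentially a set-cover problem. A greedy sketch, assuming the per-author NIP-65 write relays are already fetched (none of these names come from the actual Lists implementation):

```typescript
// Greedy set cover (hypothetical sketch): repeatedly pick the relay that
// covers the most still-uncovered authors, until the requested coverage
// fraction of authors is reached.
function pickRelays(
  relaysByAuthor: Map<string, string[]>,
  coverage = 1.0
): string[] {
  const uncovered = new Set(relaysByAuthor.keys());
  const target = Math.ceil(relaysByAuthor.size * coverage);
  const picked: string[] = [];
  while (relaysByAuthor.size - uncovered.size < target) {
    // Count how many uncovered authors each relay would serve.
    let best: string | null = null;
    let bestCount = 0;
    const counts = new Map<string, number>();
    for (const pk of uncovered) {
      for (const url of relaysByAuthor.get(pk) ?? []) {
        const c = (counts.get(url) ?? 0) + 1;
        counts.set(url, c);
        if (c > bestCount) { bestCount = c; best = url; }
      }
    }
    if (!best) break; // remaining authors have no known relays
    picked.push(best);
    for (const pk of [...uncovered]) {
      if ((relaysByAuthor.get(pk) ?? []).includes(best)) uncovered.delete(pk);
    }
  }
  return picked;
}
```

Greedy set cover is not optimal in general, but it gets close while staying cheap to compute on the client.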

JIT:

figures out relays just in time, using a usefulness score

pro: works in a lot of cases

con: not as accurate
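A usefulness score for JIT could have roughly this shape (a hypothetical scoring, not NDK's actual one):

```typescript
// Hypothetical JIT relay score: how many of the wanted authors write to
// this relay, discounted by how loaded the connection already is, with a
// bonus for sockets that are already open.
interface RelayStats {
  authorsServed: number;       // authors in the current request that write here
  activeSubscriptions: number; // subscriptions already open on this relay
  connected: boolean;          // is the socket already open?
}

function usefulness(stats: RelayStats): number {
  const connectionBonus = stats.connected ? 1 : 0;
  const load = 1 / (1 + stats.activeSubscriptions);
  return stats.authorsServed * load + connectionBonus;
}
```

Scoring like this trades the Lists approach's global optimality for a per-request decision that can be made instantly, which is where the "not as accurate" con comes from.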

I have been reading through. Here is my understanding.

- Lists is one big request against a set of relays that covers every author (each author writes to at least one relay in the set)

- JIT is used to query the notes of a specific user who uses different relays than the listed ones.

Almost. The goal is always to find the optimal (fewest connections) set of relays for a given pubkey list. The requests are then split based on what we know.

Meaning that if you make a request with ndk, we might split the request and send partial filters to different relays.
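The splitting could be sketched as follows (`assign` is a hypothetical stand-in for whatever relay-selection logic picks a relay per author; a real client would likely send each author to several relays for redundancy):

```typescript
// Split one big authors filter into per-relay partial filters, so each
// relay only receives the authors it is expected to have notes for.
function splitFilter(
  authors: string[],
  assign: (pk: string) => string // hypothetical author -> relay assignment
): Map<string, { authors: string[] }> {
  const perRelay = new Map<string, { authors: string[] }>();
  for (const pk of authors) {
    const url = assign(pk);
    const f = perRelay.get(url) ?? { authors: [] };
    f.authors.push(pk);
    perRelay.set(url, f);
  }
  return perRelay;
}
```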

The difference between Lists and JIT is in the lifecycle.

Yes, I see; this is something I considered too. It still seems imperfect and non-deterministic: you could still end up in case 1 or 2 of my initial post. It is an optimisation, I agree, but the cost of implementing it on the client is still unknown.

To limit bandwidth and resource usage, I think the best approach would be to spin up one large request with a set of predetermined relays (which the end user could change on the fly), and have a background service that runs an EOSE-bounded request for 10 users at a time, rolling over the entire follow list. It would feed your cache and forward all missing events to your frontend.

That service could either run on the client, be hosted as a DVM, or just be an API endpoint.
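The rolling part of that service could be sketched like this (`fetchUntilEose` is a hypothetical stand-in for a subscription that closes at EOSE and writes into the cache):

```typescript
// Roll over the follow list in fixed-size batches, issuing one
// EOSE-bounded request per batch, sequentially.
async function backfill(
  follows: string[],
  fetchUntilEose: (batch: string[]) => Promise<void>, // hypothetical helper
  batchSize = 10
): Promise<void> {
  for (let i = 0; i < follows.length; i += batchSize) {
    await fetchUntilEose(follows.slice(i, i + batchSize));
  }
}
```

Running the batches sequentially keeps the connection count low; the trade-off is total wall-clock time, which is exactly the latency concern raised below.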

Interesting; the parallelisation could be adjusted to tune the amount of duplicated data. But I am a bit sceptical about latency: it could take quite a while to fetch data for ~200 npubs, right?
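A back-of-envelope for that latency concern, with a purely assumed 2 s per EOSE round (the real figure depends on the relays):

```typescript
// ~200 npubs in batches of 10, run sequentially vs. 4-way parallel.
const npubs = 200;
const batchSize = 10;
const secondsPerRound = 2; // assumed average time to EOSE per batch
const rounds = Math.ceil(npubs / batchSize);
const sequentialSeconds = rounds * secondsPerRound;
// Running 4 batches at a time cuts the wall-clock time proportionally,
// at the cost of more duplicated data in flight.
const parallel = 4;
const parallelSeconds = Math.ceil(rounds / parallel) * secondsPerRound;
```

So yes: fully sequential it is on the order of tens of seconds, which is why the parallelism knob matters.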