That sounds exactly like something I optimized in my attempt to write a relay, using the Bloom filters you mentioned elsewhere.

For a relay there are two phases when processing a REQ: first the DB gets queried, then the subscription is held in memory and applied to every new event.

A relay could ignore a big REQ outright based on its size, serve it from the DB and then drop it from memory, or keep it in memory and check every new event against it (roughly as sketched below).
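For context, this is roughly what those two phases look like. A minimal Kotlin sketch; `Event`, `Filter`, `Subscription`, `queryDb` and `send` are placeholder names I made up for illustration, not the actual relay or NostrPostr API:

```kotlin
// Placeholder types, simplified from what a Nostr filter actually contains.
data class Event(val id: String, val pubkey: String, val kind: Int)

data class Filter(
    val ids: Set<String> = emptySet(),
    val authors: Set<String> = emptySet(),
    val kinds: Set<Int> = emptySet()
) {
    fun match(e: Event): Boolean =
        (ids.isEmpty() || e.id in ids) &&
        (authors.isEmpty() || e.pubkey in authors) &&
        (kinds.isEmpty() || e.kind in kinds)
}

class Subscription(val id: String, val filters: List<Filter>)

class RelayConnection(
    private val queryDb: (Filter) -> List<Event>,
    private val send: (subId: String, event: Event) -> Unit
) {
    // Subscriptions held in memory after EOSE, applied to every new event.
    private val subscriptions = mutableMapOf<String, Subscription>()

    fun onReq(sub: Subscription) {
        // Phase 1: serve matching stored events from the database.
        sub.filters.flatMap(queryDb).forEach { send(sub.id, it) }
        // Phase 2: either drop the subscription here (e.g. if the REQ is huge)
        // or keep it in memory to match against future events.
        subscriptions[sub.id] = sub
    }

    fun onNewEvent(event: Event) {
        for (sub in subscriptions.values) {
            if (sub.filters.any { it.match(event) }) send(sub.id, event)
        }
    }

    fun onClose(subId: String) {
        subscriptions.remove(subId)
    }
}
```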

My relay took a different approach: when EOSE is reached, the REQ gets compacted using Cuckoo filters to make the subscription cheaper, at the cost of occasional false positives. See here:

https://github.com/Giszmo/NostrPostr/blob/master/nostrpostrlib/src/main/java/nostr/postr/Filter.kt#L195
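I won't restate what Filter.kt#L195 does exactly, but the idea looks roughly like this. The sketch reuses the placeholder `Event`/`Filter` types from above and substitutes a small hand-rolled Bloom filter for the Cuckoo filter, since the trade-off is the same: much cheaper membership checks with a tunable false-positive rate. `BloomSet` and `CompactFilter` are hypothetical names, not the NostrPostr classes:

```kotlin
import java.nio.ByteBuffer
import java.security.MessageDigest
import java.util.BitSet
import kotlin.math.ceil
import kotlin.math.ln

// Stand-in for the Cuckoo filter: a tiny Bloom filter over strings.
class BloomSet(expectedItems: Int, falsePositiveRate: Double = 0.01) {
    private val bits = ceil(-expectedItems * ln(falsePositiveRate) / (ln(2.0) * ln(2.0)))
        .toInt().coerceAtLeast(64)
    private val hashes = ceil(bits.toDouble() / expectedItems * ln(2.0)).toInt().coerceAtLeast(1)
    private val set = BitSet(bits)

    // Double hashing: derive k bit positions from two halves of a SHA-256 digest.
    private fun indices(value: String): IntArray {
        val d = MessageDigest.getInstance("SHA-256").digest(value.toByteArray())
        val h1 = ByteBuffer.wrap(d, 0, 8).long
        val h2 = ByteBuffer.wrap(d, 8, 8).long
        return IntArray(hashes) { i -> (h1 + i * h2).mod(bits.toLong()).toInt() }
    }

    fun add(value: String) = indices(value).forEach { set.set(it) }
    fun mightContain(value: String) = indices(value).all { set.get(it) }
}

// Hypothetical compaction step: once EOSE is sent, large id/author sets are
// replaced by approximate filters so the in-memory subscription is much smaller.
class CompactFilter(ids: Set<String>, authors: Set<String>, private val kinds: Set<Int>) {
    private val idFilter = if (ids.isEmpty()) null
        else BloomSet(ids.size).also { f -> ids.forEach { f.add(it) } }
    private val authorFilter = if (authors.isEmpty()) null
        else BloomSet(authors.size).also { f -> authors.forEach { f.add(it) } }

    // Never rejects an event the exact filter would accept; may let through a few
    // extra events (false positives), which the client can filter out on its side.
    fun match(e: Event): Boolean =
        (idFilter == null || idFilter.mightContain(e.id)) &&
        (authorFilter == null || authorFilter.mightContain(e.pubkey)) &&
        (kinds.isEmpty() || e.kind in kinds)
}

fun Filter.compact() = CompactFilter(ids, authors, kinds)
```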

Discussion

How memory-hungry are today's big relays? Is it true that some run big REQs against their DB but then drop the subscription? What portion of the memory cost falls on active subscriptions? How long is the average client connected?

nostr:nevent1qvzqqqqqqypzq3huhccxt6h34eupz3jeynjgjgek8lel2f4adaea0svyk94a3njdqywhwumn8ghj7mn0wd68ytndw46xjmnewaskcmr9wshxxmmd9uq3wamnwvaz7tmjv4kxz7fwdehhxarj9e3xzmny9uqzpewgkh6n77ktlk026d2vhqmy95jnpcjgqwplhy4a293unavvljtn6vvr0s

strfry does some insane optimizations here. I recommend studying its approach.

Thanks. I read the documentation but couldn't find anything. Guess it's in the code then 😟