i could so easily make a badger-based store that can do this on an http endpoint, with an api for "by blob" and "by pubkey"
this is the thing
the nostr event structure is practically file metadata already... it even gives you arbitrary tags to attach extra things to filter on
like nostr:npub12262qa4uhw7u8gdwlgmntqtv7aye8vdcmvszkqwgs0zchel6mz7s6cgrkj, the biggest problem with the filter query protocol is the lack of pagination
i could even think of a way to fix this by adding a new envelope type that connects to a query cache
so: you send a query, the relay scans for matches and assembles a cache item, which holds the filter plus all of the matching event IDs
this item is stored in a circular buffer, so when the buffer is full the oldest items are dropped to make room for new ones
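a sketch of that cache-plus-pagination idea (`QueryCache`, `CacheItem`, and `Page` are made-up names, and the serials stand in for the monotonic event indexes described below): the relay stores the filter and the matched serials once, hands back a slot id, and a later envelope message pages through the cached result without re-scanning

```go
package main

import "fmt"

// CacheItem holds one resolved query: the filter that produced it plus the
// matching events as monotonic serial numbers (not full 32-byte IDs).
type CacheItem struct {
	Filter  string
	Serials []uint64
}

// QueryCache is a fixed-size circular buffer: when it's full, Add simply
// overwrites the oldest entry.
type QueryCache struct {
	buf  []CacheItem
	next int // slot the next Add will (over)write
}

func NewQueryCache(size int) *QueryCache {
	return &QueryCache{buf: make([]CacheItem, size)}
}

// Add stores a result set and returns its slot id, which a client would echo
// back in the (hypothetical) pagination envelope.
func (q *QueryCache) Add(item CacheItem) int {
	id := q.next
	q.buf[id] = item
	q.next = (q.next + 1) % len(q.buf)
	return id
}

// Page returns serials [offset, offset+limit) of a cached result.
func (q *QueryCache) Page(id, offset, limit int) []uint64 {
	s := q.buf[id].Serials
	if offset >= len(s) {
		return nil
	}
	if end := offset + limit; end < len(s) {
		s = s[:end]
	}
	return s[offset:]
}

func main() {
	qc := NewQueryCache(8)
	id := qc.Add(CacheItem{Filter: `{"kinds":[1]}`, Serials: []uint64{10, 11, 12, 13, 14}})
	fmt.Println(qc.Page(id, 0, 2)) // [10 11]
	fmt.Println(qc.Page(id, 2, 2)) // [12 13]
	fmt.Println(qc.Page(id, 4, 2)) // [14]
}
```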
in addition, to be clear, the event IDs are already indexed to a monotonic index value in the database, so the cached result isn't much data at all: each event in the result is just an 8 byte (or, like fiatjaf used, 4 byte) serial number, and done
i used 8 bytes because i don't think 4 billion records is actually that many when the average event size is around 700 bytes