no, it's just an API detail

instead of returning events, the filter and fulltext search endpoints are specced to return simple lists of event IDs in whatever the expected sort order is (i'm thinking of putting since/until/sort as parameters to the endpoint, because currently results default to ascending order, which people may not want; in theory they could even be further sorted by npub or kind)
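
something like this, roughly; the method, path, parameter names and response layout here are all just illustrative, not a final spec:

```go
// hypothetical request/response shape, not the final spec:
//
//	POST /filter?since=1700000000&until=1700003600&sort=desc
//	body: {"kinds":[1],"authors":["<hex pubkey>"]}
//
// the response is nothing but the matching event IDs, already in the
// requested order; the client fetches any bodies it lacks afterwards:
//
//	["<hex event id>","<hex event id>", ...]
type FilterResult []string // hex event IDs in stored (≈ chronological) order
```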

the reason for changing the API to only return event IDs is that it pushes the query state problem for pagination back onto the client. the relay doesn't need to keep track of the recent history of queries to enable pagination, and i don't like that shit anyway because it's inherently inconsistent: the query could return more events at any moment afterwards. so what do we do if we put pagination on the relay? do we make it update those cursors? then the client gets out of sync as well

implicitly, any query with a filter that has no "until" on it searches up to a specific moment in time: the moment the relay receives the query. the identical query 10 seconds later could return more events, at least in the window since that time, not to mention the relay may have had older matching events pushed to it by spiders or whatever in between
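
concretely, under this model a client could paginate something like the following sketch, where every name (FetchIDs, FetchEvents, the cache, the limit parameter) is an assumption for illustration, not realy API:

```go
package client

// every name here is an assumption for illustration, not realy API.
type (
	FilterQuery struct {
		Until int64  // upper bound timestamp
		Sort  string // "asc" or "desc"
		Limit int    // page size (assumed parameter)
	}
	Event struct {
		ID        string
		CreatedAt int64
	}
)

// Client is an assumed wrapper over the HTTP endpoints; FetchEvents is
// assumed to populate the local cache as a side effect.
type Client interface {
	FetchIDs(q FilterQuery) ([]string, error)
	FetchEvents(ids []string) error
	Cached(id string) (Event, bool)
}

// backfill pages backwards through history: the relay stays stateless,
// the client owns the cursor and its own cache.
func backfill(c Client, q FilterQuery, pageSize int) error {
	q.Sort, q.Limit = "desc", pageSize
	for {
		ids, err := c.FetchIDs(q)
		if err != nil {
			return err
		}
		if len(ids) == 0 {
			return nil // nothing older left
		}
		// only pull event bodies we don't already have cached
		var missing []string
		for _, id := range ids {
			if _, ok := c.Cached(id); !ok {
				missing = append(missing, id)
			}
		}
		if err := c.FetchEvents(missing); err != nil {
			return err
		}
		// move the cursor below the oldest event on this page; the same
		// query re-run later simply surfaces anything new
		oldest, ok := c.Cached(ids[len(ids)-1])
		if !ok {
			return nil // shouldn't happen after the fetch above
		}
		q.Until = oldest.CreatedAt - 1
	}
}
```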

so, i say, fuck this complication. you just keep an index between event IDs and event serials, search the indexes created by the filter fields to get the matching serials, look each of those up in the event ID table, and return the IDs sorted ascending or descending in the order they were stored on the relay (which is mostly actual chronological order)

idk, maybe i should add a timestamp to that index so the chronological ordering invariant can actually be enforced
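
in code terms, the lookup path is roughly this, written against an abstract key-value store; the types and method names are illustrative, not realy's actual schema:

```go
package store

import "sort"

type (
	ID     [32]byte // event ID (hash)
	Serial uint64   // monotonic per-event storage counter
	Filter struct {
		Kinds   []int
		Authors [][32]byte
	}
)

// KV abstracts the store; the method set is illustrative only.
type KV interface {
	// ScanIndexes walks the filter-field indexes and returns the
	// serials of every matching event
	ScanIndexes(f Filter) []Serial
	// LookupID resolves a serial through the event ID table, also
	// returning the stored timestamp (per the idea above of adding one)
	LookupID(s Serial) (id ID, createdAt int64, err error)
}

// queryIDs: the filter-field indexes yield serials, the ID table
// resolves them, and the result comes back in storage order, which is
// mostly chronological order.
func queryIDs(db KV, f Filter, desc bool) ([]ID, error) {
	serials := db.ScanIndexes(f)
	sort.Slice(serials, func(i, j int) bool {
		if desc {
			return serials[i] > serials[j]
		}
		return serials[i] < serials[j]
	})
	ids := make([]ID, 0, len(serials))
	for _, s := range serials {
		id, _, err := db.LookupID(s)
		if err != nil {
			return nil, err
		}
		ids = append(ids, id)
	}
	return ids, nil
}
```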

anyway, i'm interested to hear other opinions about why the relay should implement the filter API differently than i described, but i've thought a lot about it and i'm leaning very much towards returning IDs so the client manages its own cache state, instead of pushing that onto the relay to give people their precious pagination

i already got too much complexity in here

Discussion

> instead of returning events, the filter and fulltext search endpoints are specced to return simple lists of event IDs in whatever the expected sort order is.

In our case that would work poorly. Kind:0s are replaceable, so if we returned a list of kind:0s sorted by the npub's rank, the result could hardly be reused later. One of the kind:0s might become outdated by the next time you want to use it.

We care about this because generating these responses is quite a lot of work. That's why we return hex pubkeys sorted by the npub's rank: the rank of these pubkeys is stable over time, except for edge cases like a key hack.

I think our case is different enough from the normal use of relays; for most other cases, returning event IDs is a solid choice.

this is why i'm prompting you to think about what a helpful API for your task would look like. after i'm done making the basic replacement for filter search, plus HTTP for everything else using nip-98 and optionally JWT, this is the kind of thing i can see becoming useful

right now, #realy is a bit messy, in the sense that some things are still jammed together in ways they shouldn't be, and other things are separated and duplicated in ways they shouldn't be

the ideal situation is one where you can define a single, simple source file that specifies which parts are available. so, eg, we have a standard NIP-01 implementation, add to that a spider that farms the whole nostr network for this data, and then it exposes protected endpoints that yield search results precisely fitting the needs of vertex (something like the sketch below)
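
as a sketch of the shape (none of these packages, types, or option names exist in realy today, and imports of the made-up packages are omitted):

```go
// hypothetical single-file composition of a relay build; every name
// here is made up to show the shape of the idea, not actual realy code.
package main

func main() {
	r := relay.New(
		relay.WithNIP01(),  // standard filter/event handling
		relay.WithSpider(), // farm the wider nostr network for data
		relay.WithAuth(relay.NIP98(), relay.OptionalJWT()),
		relay.WithEndpoint("/directory", vertex.DirectoryHandler()),
	)
	r.ListenAndServe(":3334")
}
```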

so, yeah, from what you're describing, right off the top of my head i can picture something like an endpoint called `/directory` which takes a parameter for the last-updated timestamp you're interested in (since your database already has everything up to that moment), and it spews back all of the relevant event kinds newer than that in one big shebang, which then funnels into your graph generation pipeline
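
a rough sketch of what that handler could look like, using only the standard library; the path, the `since` parameter name, and the `EventStore` interface are assumptions:

```go
package main

import (
	"encoding/json"
	"net/http"
	"strconv"
)

// EventStore is an assumed interface: Newer streams stored events with
// created_at greater than the given timestamp.
type EventStore interface {
	Newer(since int64) <-chan json.RawMessage
}

// directoryHandler: GET /directory?since=<unix ts> returns every
// relevant event newer than the caller's last sync point as
// newline-delimited JSON, in one big shebang.
func directoryHandler(store EventStore) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		since, err := strconv.ParseInt(r.URL.Query().Get("since"), 10, 64)
		if err != nil {
			http.Error(w, "bad since parameter", http.StatusBadRequest)
			return
		}
		w.Header().Set("Content-Type", "application/x-ndjson")
		enc := json.NewEncoder(w)
		for ev := range store.Newer(since) {
			if err := enc.Encode(ev); err != nil {
				return // client went away
			}
		}
	}
}
```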

yeah, DVMs can all be morphed into standard HTTP APIs.

I guess one loses the simplicity of having only one communication protocol (websocket) though.