Are backlinks part of the design or on the roadmap for NKB/Wikistr nostr:nprofile1qqs06gywary09qmcp2249ztwfq3ue8wxhl2yyp3c39thzp55plvj0sgprdmhxue69uhhg6r9vehhyetnwshxummnw3erztnrdakj7qguwaehxw309a6xsetrd96xzer9dshxummnw3erztnrdakj7qgmwaehxw309amksetpwshxsctswpuhgctkv4exutnrduhsw0qlr4 nostr:nprofile1qqsdcnxssmxheed3sv4d7n7azggj3xyq6tr799dukrngfsq6emnhcpspzamhxue69uhhyetvv9ujumn0wd68ytnzv9hxgtcprdmhxue69uhhg6r9vehhyetnwshxummnw3erztnrdakj7hy49z6?
Discussion
Do you mean the tags for derivative events? I'm already using those in some of the publications and we have a ticket to display them in the cards, for Gutenberg (0.1.0).
Wiki viewer and internal [[linkage]] is coming in the Euler (0.2.0) version, since we need it for research papers.
I'm not sure if that's what you meant.
Derivative link example for nostr:nprofile1qqsvdxm3m3tylhp4ptxal7ff7fwhyq4vz3cvsaygvz9adkvwgf46wccpr9mhxue69uhhyetvv9ujuumwdae8gtnnda3kjctv9uq3zamnwvaz7tmwdaehgu3wwa5kuef0qythwumn8ghj7un9d3shjtnwdaehgu3wvfskuep0urhjv6's book, at the bottom of the JSON.
I mean the events that link TO a given event: inbound links. (Which I imagine is far harder to compute. And it would have to be client/relay-side, not part of the event itself, for a lot of reasons, one of them being that it updates occasionally.)
See Obsidian's concept of backlinks.
Or do you mean that you can click on a 30040 and see which 30040s it is nested in?
That's something that can be computed.
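One way a client could compute that today is with ordinary NIP-01 filters: ask relays for every event that tags the target. A minimal sketch, assuming the target is referenced either by id (`#e`) or, for addressable events like a 30040 index, by its `kind:pubkey:d-tag` address (`#a`); the filters and ids below are illustrative only:

```python
import json

def backlink_filters(event_id: str, address: str) -> list[dict]:
    """Sketch: NIP-01 filters that find inbound links to an event.

    A REQ may carry several filters, which a relay ORs together, so we
    send one filter per tag type rather than combining both tag
    conditions in a single filter (which would AND them).
    """
    return [
        {"#e": [event_id]},   # events that e-tag the target by id
        {"#a": [address]},    # events that a-tag the addressable form
    ]

# a client would send: ["REQ", subscription_id, *filters]
req = ["REQ", "backlinks", *backlink_filters(
    "ab" * 32,               # placeholder 64-char hex event id
    "30040:abcdef:my-book",  # hypothetical addressable-event pointer
)]
print(json.dumps(req))
```

This is the "computed client/relay-side" approach from the message above; it has to be re-run to pick up new inbound links, which is exactly why an index on the relay is the more natural home for it.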
creating backlinks would require an indexing system
making events to share the index could be done, but it's really a job for a relay
my opinion is that nostr REQs are too limited, and to some extent purpose-specific indexes will emerge
they should be standardized to avoid fragmentation, and this is one reason I was saying it should be easy to extend relays
well, we could make a new variant of the REQ maybe? if you want to make a proposal i will give it a good going over and make sure it's solid, perhaps we can get others to help with this
part of the issue is that to do pagination you need to cache queries and their results, so it must be spam-protected, and it depends on implementing a state cache for the queries
yes, 99% of extensions will act like tag queries, but based on other things (such as links in the content field)
I am considering an advanced search extension, which can be easier to implement in an external search DB
adding full-text search is independent of the pagination issue; there is a NIP for the search field, i forget which, but it's a separate matter from providing a query state cache and the pagination that depends on that cache
you can make a full text search (including, potentially, searching tag fields, i guess) separately from implementing pagination
don't muddle the two things together; the relay can already signal search field support in nip-11, and if you use that with a REQ as in NIP-01, you get an even bigger pagination problem
so, solve pagination first, then add the full text search
NIP-50 has the problem that it is extremely limited
it's built in, you can construct filters for this... i was chatting with him the other day about various relay index related subjects, in fact, and really, to do more cool stuff with relays we need to have more indexes and probably a more advanced query mechanism
i don't think we really need to implement graphql exactly, just full result generation and pagination, plus the necessary garbage-collected query cache required to serve up the paginated results efficiently
i've been thinking about adding such a query mechanism to a NIP-98 authed HTTP endpoint, but it is quite a bit of extra work to cache queries
like, i'm writing an index for npubs right now, as my next task: a mechanism for compressing follow/mute lists especially, down to very small lists of index keys instead of whole npubs
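The npub index idea above can be sketched in a few lines: hand out a small serial number the first time a pubkey is seen, and store lists as serials instead of repeated 32-byte keys. This is an assumed design for illustration, not the actual implementation being described:

```python
class NpubIndex:
    """Sketch (assumed design): map each pubkey to a small serial
    number so follow/mute lists can be stored as lists of integers
    instead of repeating the full 32-byte hex key every time."""

    def __init__(self):
        self._key_to_serial = {}  # pubkey hex -> serial
        self._serials = []        # serial -> pubkey hex

    def serial(self, pubkey_hex: str) -> int:
        # assign the next serial on first sight, reuse it afterwards
        if pubkey_hex not in self._key_to_serial:
            self._key_to_serial[pubkey_hex] = len(self._serials)
            self._serials.append(pubkey_hex)
        return self._key_to_serial[pubkey_hex]

    def pubkey(self, serial: int) -> str:
        return self._serials[serial]

idx = NpubIndex()
follows = ["aa" * 32, "bb" * 32, "aa" * 32]  # placeholder hex pubkeys
compressed = [idx.serial(pk) for pk in follows]
print(compressed)  # [0, 1, 0]
```

Each stored list shrinks from 64 hex characters per entry to a small integer, at the cost of keeping the serial-to-pubkey table alongside the data.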
so what i have in mind is: it accepts a standard query, then gives you the metadata of the result (ie, total number of results), and it keeps a list of all the event serial numbers cached under a hash of the canonical form of the filter that generated it; you can then ask it for results-per-page/page number and voila, pagination
but it needs a second, temporary index; it could be kept in memory or stashed in a flat file under a hash of the canonically formatted filter
and yes, i already have a filter canonicalisation algorithm and have applied it somewhere (i forget exactly where now); it generates a truncated hash identifier for filters
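One possible canonicalisation, sketched here as an assumption rather than the actual algorithm mentioned: since the value arrays in a filter are semantically sets, sort them too before hashing, so that filters that differ only in ordering map to the same identifier:

```python
import hashlib
import json

def canonical_filter_id(filt: dict, length: int = 8) -> str:
    """Sketch of filter canonicalisation (assumed design): sort the
    value lists (filter arrays are sets semantically), serialise with
    sorted keys and compact separators, then hash and truncate."""
    norm = {
        k: sorted(v) if isinstance(v, list) else v
        for k, v in filt.items()
    }
    blob = json.dumps(norm, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(blob.encode()).hexdigest()[:length]

# two filters that differ only in ordering get the same identifier
a = canonical_filter_id({"kinds": [1, 30040], "#e": ["x", "y"]})
b = canonical_filter_id({"#e": ["y", "x"], "kinds": [30040, 1]})
print(a == b)  # True
```

Sorting the inner arrays matters: plain `sort_keys` JSON serialisation alone would treat `["x","y"]` and `["y","x"]` as different filters and defeat cache hits.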