I understand. Maybe it would make sense to build a DVM for this that charges for said query, and potentially caches some of the stuff? Or a DVM for arbitrary queries of one's social graph? Do we have a query language for this?

CC nostr:nprofile1qqsfnw64j8y3zesqlpz3qlf3lx6eutmu0cy6rluq96z0r4pa54tu5eqpz9mhxue69uhkummnw3ezuamfdejj7q6hdgd nostr:nprofile1qqszv6q4uryjzr06xfxxew34wwc5hmjfmfpqn229d72gfegsdn2q3fgprdmhxue69uhhxct5v4kxc6t5v5hxs7njvscngwfwvdhk6qgkwaehxw309a5kucn00qhxummnw3ezuamfdejsz9nhwden5te0wfjkccte9cc8scmgv96zucm0d57awgv0

Discussion

yeah, it would be cool to have a specialised relay that just spiders the network constantly and only stores a small set of event kinds like this (it would make sense to bundle at least profiles, follows and mutes) - then you can have high confidence from a single query on it and be done
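In concrete Nostr terms that bundle would be kind 0 (profile metadata), kind 3 (follow/contact list) and kind 10000 (NIP-51 mute list). A trivial sketch of the whitelist such a spider relay might apply on ingest (plain Rust, names are mine):

```rust
/// Event kinds a social-graph spider relay would keep:
/// 0 = profile metadata, 3 = follow (contact) list, 10000 = NIP-51 mute list.
const KEPT_KINDS: [u16; 3] = [0, 3, 10_000];

/// Decide whether an incoming event is stored, based on its kind alone.
fn should_store(kind: u16) -> bool {
    KEPT_KINDS.contains(&kind)
}

fn main() {
    assert!(should_store(3));   // follow list: kept
    assert!(!should_store(1));  // ordinary text note: dropped
}
```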

and yeah, for such a relay you'd probably want to devise an npub compression scheme that flattens the lists by assigning each pubkey a monotonic index number instead of storing the full key over and over again, and uses a variable-length encoding so the actual size of the follow events it stores is tiny
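A minimal sketch of that compression in plain Rust (the `PubkeyIndex` type and the `intern`/`write_varint` names are mine, not from rust-nostr): each pubkey gets a monotonic index the first time it is seen, and a follow list becomes a run of LEB128 varints instead of repeated 32-byte keys.

```rust
use std::collections::HashMap;

/// Assigns each 32-byte pubkey a monotonically increasing index the first
/// time it is seen, so stored follow lists hold small integers instead of
/// repeating the full key over and over.
#[derive(Default)]
struct PubkeyIndex {
    map: HashMap<[u8; 32], u64>,
    by_index: Vec<[u8; 32]>, // reverse lookup: index -> pubkey
}

impl PubkeyIndex {
    fn intern(&mut self, pk: [u8; 32]) -> u64 {
        if let Some(&i) = self.map.get(&pk) {
            return i;
        }
        let i = self.by_index.len() as u64;
        self.by_index.push(pk);
        self.map.insert(pk, i);
        i
    }
}

/// Unsigned LEB128 (varint) encoding: indexes under 128 take one byte,
/// under 16384 two bytes, versus 32 bytes for a raw pubkey.
fn write_varint(mut v: u64, out: &mut Vec<u8>) {
    loop {
        let byte = (v & 0x7f) as u8;
        v >>= 7;
        if v == 0 {
            out.push(byte);
            break;
        }
        out.push(byte | 0x80);
    }
}

fn main() {
    let mut idx = PubkeyIndex::default();
    // 100 fake follows standing in for the pubkeys in a kind-3 event.
    let follows: Vec<[u8; 32]> = (0u8..100).map(|i| [i; 32]).collect();

    let mut compressed = Vec::new();
    for pk in &follows {
        write_varint(idx.intern(*pk), &mut compressed);
    }
    // 100 follows: 3200 bytes of raw keys (~6400 as hex in the event JSON)
    // versus ~100 bytes of varint indexes.
    println!("{} bytes for {} follows", compressed.len(), follows.len());
}
```

The reverse table (`by_index`) is what lets the relay expand a compressed list back into full pubkeys when it serves the event.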

sounds like a fun project but it would take me a week or two to do it in parallel with my main paid gig

Yes. Can be done. For user search I actually pull all profiles from npubs on huge relays like damus and primal into a local database. You could also fetch their follow lists and from there query pretty fast for followers etc. The initial sync will take some time and you need to refresh every couple of minutes, but after that it should work more or less fast locally.
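Once the profiles and follow lists are in the local database, a "followers of X" query is just an inversion of the stored kind-3 lists. A rough sketch in plain Rust, with an in-memory map standing in for the local lmdb store (the store layout here is my assumption, not how any particular implementation does it):

```rust
use std::collections::HashMap;

type Pubkey = String; // hex pubkey; a real store would use [u8; 32]

/// Build a reverse index (followee -> followers) from locally synced
/// kind-3 follow lists (follower -> list of followees).
fn invert(follow_lists: &HashMap<Pubkey, Vec<Pubkey>>) -> HashMap<Pubkey, Vec<Pubkey>> {
    let mut followers: HashMap<Pubkey, Vec<Pubkey>> = HashMap::new();
    for (follower, followees) in follow_lists {
        for followee in followees {
            followers
                .entry(followee.clone())
                .or_default()
                .push(follower.clone());
        }
    }
    followers
}

fn main() {
    // Toy data standing in for the locally synced contact lists.
    let mut follow_lists: HashMap<Pubkey, Vec<Pubkey>> = HashMap::new();
    follow_lists.insert("alice".into(), vec!["carol".into()]);
    follow_lists.insert("bob".into(), vec!["carol".into(), "alice".into()]);

    let followers = invert(&follow_lists);
    // Who follows carol? -> alice and bob (order depends on map iteration)
    println!("{:?}", followers.get("carol"));
}
```

The inverted index can be rebuilt after each periodic refresh, so individual follower lookups stay a single map access.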

Could be. I use LMDB right now. rust-nostr offers nice tools for this.

thanks for the provocation though, i'm going to do something i've meant to do for ages: apply maximum possible compression to follow/mute list storage by creating an npub index keyed by a monotonic counter value. then i could add a profile/follow/mute list spider that gathers as many of these as it can find during its runs... follow lists in particular commonly run to hundreds of kilobytes, and with this optimization i can squash that down to tens of kilobytes per user, and thus store roughly ten times as many such events
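Rough back-of-envelope on those sizes (my own estimate, not measurements; the per-entry overheads are assumptions):

```rust
fn main() {
    // Back-of-envelope for a large follow list (all figures approximate).
    let follows = 2_000u64;

    // In a raw kind-3 event each follow is a JSON tag like ["p","<64 hex chars>"],
    // roughly 64 hex chars plus ~10 bytes of JSON framing per entry.
    let raw_bytes = follows * (64 + 10);

    // With a monotonic npub index and varint encoding, most indexes fit in
    // 2-3 bytes once a few hundred thousand pubkeys have been interned.
    let compressed_bytes = follows * 3;

    println!("raw: ~{} KB", raw_bytes / 1024);               // ~144 KB
    println!("compressed: ~{} KB", compressed_bytes / 1024); // ~5 KB
    println!("ratio: ~{}x", raw_bytes / compressed_bytes);   // ~24x
}
```

The pubkey-to-index table itself still has to be stored once, which eats into the savings, so "roughly ten times as many events" is a sensible working figure.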