Publishing trusted assertions would be great because we can cache them on the client the same way we cache everything else and use Nostr REQ requests to check for updates as users navigate.
All clients do that already for Kind 0 (to get the user's name/picture). It is very easy to add a new kind to that same request and get the scores as well. And since these are replaceable events, they update only when needed (when the score actually changes, which is rare), and the EOSE timestamp can be used to request only changes since the last event, avoiding the data cost of downloading the same user's info over and over just because we have no other way to know whether it has changed on the server.
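A minimal sketch of what that combined request could look like at the protocol level, assuming a browser-style WebSocket, a placeholder replaceable kind for the assertions (30382 here is illustrative, not a confirmed assignment), and an EOSE timestamp persisted from the previous session:

```typescript
// Sketch: one REQ that fetches Kind 0 profiles plus the provider's
// assertion events, asking only for anything newer than what we cached.
const relay = new WebSocket("wss://relay.example.com"); // hypothetical relay

const providerPubkey = "<wot-provider-pubkey>";        // the trusted provider
const profilePubkeys = ["<pubkey-a>", "<pubkey-b>"];   // users being rendered
const lastEose = 1712345678; // stored from the previous session

relay.onopen = () => {
  relay.send(JSON.stringify(["REQ", "profiles-and-scores",
    { kinds: [0], authors: profilePubkeys, since: lastEose },
    { kinds: [30382], authors: [providerPubkey], "#d": profilePubkeys, since: lastEose },
  ]));
};

relay.onmessage = (msg) => {
  const [type, , event] = JSON.parse(msg.data as string);
  if (type === "EVENT") console.log("update:", event.kind, event.pubkey);
  if (type === "EOSE") console.log("caught up; persist the new timestamp");
};
```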
I will be adding them to Amethyst next, so if you publish it, let me know: I just need to allow users to set your pubkey as their trusted WoT provider and then they will see the scores in their interface.
The search interface needs an API, though. So we might need to define the inputs and outputs in a NIP. This could be via HTTP calls (like Blossom does), via DVMs, or via NIP-50 itself (the user auths and sends a search that uses the WoT of the authed user).
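For the NIP-50 route the wire format already exists; a sketch of what a WoT-scoped search could look like, assuming the relay applies the authenticated user's graph to the `search` filter (the NIP-42 AUTH exchange that would precede this is omitted):

```typescript
// Sketch: NIP-50 search over an authed connection. The relay is assumed to
// rank/filter results using the WoT of the authenticated pubkey.
const relay = new WebSocket("wss://search.example.com"); // hypothetical relay

relay.onopen = () => {
  relay.send(JSON.stringify(["REQ", "wot-search", {
    kinds: [0],          // searching profiles
    search: "fiatjaf",   // the only required input: a query string
    limit: 20,
  }]));
};
```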
Yes, that sounds great! I think that trusted assertions and a DVM-like API (using CVM) would be sufficient for a wide range of use cases. Trusted assertions alone wouldn't be enough for the case I mentioned earlier, where you encounter a user who doesn't have any assertion attached; to get a score there, you would need to call a service and request it for that specific user.
The search interface API we already have in Relatr is quite straightforward. It is deliberately unopinionated: just a required 'query' parameter, which is a string. There are other parameters, but they are entirely optional.
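As described, the request shape could be as small as the sketch below; the field names beyond `query` are illustrative, not Relatr's actual parameter names:

```typescript
// Hypothetical request payload for the search interface: only `query` is
// required; everything else shown here is an optional, illustrative extra.
interface SearchRequest {
  query: string;    // required: free-form search string
  limit?: number;   // optional: max results
  source?: string;  // optional: pubkey whose WoT anchors the ranking
}

const req: SearchRequest = { query: "alice" };
```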
Regarding the interface for the API, yes, it can be exposed in different ways. I like what Profilestr is doing by exposing a REST API and leveraging different WoT providers under the hood, currently Vertex and Relatr. In the case of Relatr, it is designed for CVM, which already provides all the primitives for authentication and the other requirements of a solid user/service interaction.
Ohh, so you don't build the graph for everybody? You just compute assertions for the users my user is requesting or has requested? That could take some time to do on the fly... why not just compute everything?
I assume you need the graph to compute WoT for others who follow me (follows of follows), but I might be misunderstanding something.
The way Relatr implements this is as follows: on the first server run, it takes the source public key defined in the config and the configured hops. It then starts scraping relays to obtain follow lists. Once all contact lists for the defined hops are scraped, it builds the social graph (currently using Martti Malmi's library). After that, it begins to compute the defined validations. Once this process is finished, the system is ready.
At this point, if someone requests the trust score for a public key not in the graph, Relatr fetches the contact list, performs validations, and returns the computed trust score. This new public key is then added to the social graph, and the validations are cached, so this is only done once. Relatr does not attempt to build a global graph; however, you can define as many hops as you want to get a more complete picture. Even then, there will be instances where new public keys appear and you'll need to compute trust for them.
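A rough sketch of that startup flow under those assumptions; the function names and graph shape are illustrative, not Relatr's actual internals (which use Martti Malmi's library):

```typescript
// Illustrative startup flow: scrape follow lists out to `hops` from the
// configured source pubkey, then keep the result as a follow graph.
type Pubkey = string;
type Graph = Map<Pubkey, Set<Pubkey>>; // pubkey -> pubkeys it follows

async function buildGraph(
  source: Pubkey,
  hops: number,
  fetchFollows: (pk: Pubkey) => Promise<Pubkey[]>, // reads the kind 3 contact list
): Promise<Graph> {
  const graph: Graph = new Map();
  let frontier = [source];
  for (let depth = 0; depth < hops && frontier.length > 0; depth++) {
    const next: Pubkey[] = [];
    for (const pk of frontier) {
      if (graph.has(pk)) continue;            // already scraped this key
      const follows = await fetchFollows(pk);
      graph.set(pk, new Set(follows));
      next.push(...follows);
    }
    frontier = next;
  }
  return graph; // validations would then run over these nodes
}
```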
I hope this clarifies the process.
Interesting. It's weird to load a key that has no connection to the graph. If it wasn't computed on start, then nobody in the user's extended follow graph (recursively) follows this key, which means the score should be zero, no?
When you get a new key to compute, how do you find connections to the existing graph of users to appropriately score it?
Yes, it's weird, but there are still cases where this happens, such as new users with new public keys that weren't present when the social graph was created. To compute a new key, the process is to first fetch its contact list, add it to the graph, recalculate distances, and then, if the key is reachable because it has some connection in the graph, calculate the validations.
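In code, that on-demand path might look roughly like this, reusing the `Graph`/`Pubkey` types from the earlier sketch; `computeValidations` is a stand-in for whatever checks the provider actually runs, not Relatr's real scoring:

```typescript
// Illustrative on-demand path: pull the new key's contact list, splice it
// into the existing graph, and only run validations if the key is reachable
// from the source; otherwise its score is effectively zero.
async function scoreNewKey(
  graph: Graph,
  source: Pubkey,
  target: Pubkey,
  fetchFollows: (pk: Pubkey) => Promise<Pubkey[]>,
): Promise<number> {
  if (!graph.has(target)) {
    graph.set(target, new Set(await fetchFollows(target)));
  }
  const distance = shortestDistance(graph, source, target); // BFS over follows
  if (distance === Infinity) return 0; // no connection in the graph -> zero trust
  return computeValidations(graph, target, distance);
}

// Plain BFS over the follow graph, treating follows as directed edges.
function shortestDistance(graph: Graph, from: Pubkey, to: Pubkey): number {
  const seen = new Set([from]);
  let frontier = [from];
  let depth = 0;
  while (frontier.length > 0) {
    if (frontier.includes(to)) return depth;
    const next: Pubkey[] = [];
    for (const pk of frontier)
      for (const f of graph.get(pk) ?? [])
        if (!seen.has(f)) { seen.add(f); next.push(f); }
    frontier = next;
    depth++;
  }
  return Infinity;
}

// Stand-in scoring: decays with distance so the sketch runs end to end.
function computeValidations(graph: Graph, pk: Pubkey, distance: number): number {
  return 1 / (1 + distance);
}
```

Note that adding the new key's own contact list only creates outbound edges, so a key nobody in the graph follows still ends up unreachable and scores zero, which matches the point raised above.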