Personalized trust metrics definitely fall into the “no solutions, only tradeoffs” category. If every metric is always updated at request time and never stale, we limit not only the number of requests per unit time but also the complexity of the scores. Suppose I want to generate a baseline “real user, not a bot” score using follows, mutes, reports, zaps, and content interactions. I then want to use that score to curate a list of nostr devs, and have that list of nostr devs curate a list of NIPs. That is a daisy chain of three scores. What if I want to chain together more than three scores using a complex configuration? It becomes impractical to recalculate every score from scratch every time it is used. Somewhere along the way, personalized trust metrics will have to be cached, with the frequency of updates based on priority and availability of resources.
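The daisy-chain idea above can be sketched as layered scores, where each layer caches its own results and higher layers read cached values from lower layers instead of recomputing them. This is a minimal hypothetical sketch, not any real implementation; the names (`ScoreLayer`, `botScore`, etc.) and the placeholder scoring formulas are made up for illustration.

```typescript
type Pubkey = string;

interface CachedScore {
  value: number;
  computedAt: number; // epoch ms
}

// One layer in the daisy chain. Each layer caches its results with its
// own TTL, so a higher layer never forces a from-scratch recomputation
// of the layers below it while their cached values are still fresh.
class ScoreLayer {
  private cache = new Map<Pubkey, CachedScore>();

  constructor(
    private compute: (pk: Pubkey) => number,
    private ttlMs: number,
  ) {}

  get(pk: Pubkey, now = Date.now()): number {
    const hit = this.cache.get(pk);
    if (hit && now - hit.computedAt < this.ttlMs) return hit.value;
    const value = this.compute(pk);
    this.cache.set(pk, { value, computedAt: now });
    return value;
  }
}

// Layer 1: baseline "real user, not a bot" score (placeholder formula).
const botScore = new ScoreLayer((pk) => (pk.length % 10) / 10, 60_000);
// Layer 2: "nostr dev" score built on top of the cached bot score.
const devScore = new ScoreLayer((pk) => botScore.get(pk) * 0.9, 300_000);
// Layer 3: NIP-curation score built on top of the cached dev score.
const nipScore = new ScoreLayer((pk) => devScore.get(pk) * 0.8, 900_000);
```

Note how the TTLs grow up the chain: the further a score is from raw interaction data, the less often it plausibly needs refreshing, which is one way to map "frequency of updates based on priority" onto a concrete structure.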
Discussion
Hey! I'm going to try to answer your questions here.
Regarding the question about how many requests a server can handle per second, there is no one-size-fits-all answer: it depends on the underlying server and how it handles requests. The CVM transport is not the bottleneck here; throughput comes down to how the server itself is designed and whether it is optimized to handle a large number of requests per second.
On the other hand, regarding the question of stale computations, it is indeed impractical to recalculate all metrics all the time. In Relatr, we cache these computations with a TTL, so each computation is done only once and the cached result is invalidated when the TTL expires. The example of rapidly scrolling through a feed should pose no problem. However, it is not the most ideal use case for the current architecture of Relatr, as it implies sending a request and waiting for a response. If the events are already published on relays, you just need to craft a specific filter for them and fetch them.
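The compute-once, TTL-invalidated pattern described above can be sketched as follows. This is a hypothetical illustration of the caching idea, not Relatr's actual implementation; `TtlCache` and the five-minute TTL are assumptions made up for the example.

```typescript
interface Entry<T> {
  value: T;
  expiresAt: number; // epoch ms after which the cached value is stale
}

// Compute-once cache: a value is recomputed only after its TTL expires.
class TtlCache<T> {
  private entries = new Map<string, Entry<T>>();
  constructor(private ttlMs: number) {}

  getOrCompute(key: string, compute: () => T, now = Date.now()): T {
    const hit = this.entries.get(key);
    if (hit && hit.expiresAt > now) return hit.value; // still fresh
    const value = compute(); // only runs once per TTL window per key
    this.entries.set(key, { value, expiresAt: now + this.ttlMs });
    return value;
  }
}

// Rapid feed scrolling: 100 lookups of the same pubkey within one TTL
// window trigger exactly one computation; the rest are cache hits.
let computations = 0;
const cache = new TtlCache<number>(5 * 60_000); // assumed 5-minute TTL
const score = (pk: string) =>
  cache.getOrCompute(pk, () => {
    computations++;
    return 0.42; // placeholder score
  });
for (let i = 0; i < 100; i++) score("npub1example");
// computations === 1
```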
As you mentioned, there are no perfect solutions, and we are still refining Relatr. We will definitely keep trusted assertions in mind to see how we can integrate them into our current model.
What are your thoughts on monetization?
Monetization should be up to the provider. We currently run the default Relatr instance for free, and we will continue to do so for the time being as we are still building the service. Once it is more mature, we can consider different ways to monetize it. On the other hand, monetization is already possible using ContextVM and it will be better integrated once CEP-8 is completed. https://github.com/ContextVM/contextvm-docs/issues/8
What are your thoughts on monetization?
The first question to ask is: who’s the customer? Who pays to get scores calculated? There will be many models, but to me the most straightforward one is for the end user to be the customer. Alice subscribes to a service which calculates her personalized trust metrics and makes them readily available to clients.
Another model is for the clients to be the customer. Maybe it will work, but I do wonder how many clients will want to pay for this. Most of them will be on a shoestring budget, and if they’re not, they’ll just calculate scores internally. But as a user, I don’t want my personalized scores to look different whenever I change clients. For example, I want to select an algo for verified follower count and see the same counts wherever I go.
Further considering the pull vs. request/response (req/res) models, it is true that using already published trust assertion data can be convenient and easy to query with appropriate filters. However, one issue is that not all profiles are scored. For instance, a service provider might start publishing a large number of trust assertions, but the set will never be fully complete because it is dynamic, and it's impossible to anticipate which users need to be scored. Consider the example of a nostr feed we discussed: each profile should be complemented with trusted assertions. But what if one of the profiles in the feed doesn't have any trust assertion attached? In that case, the only solution is to request the trust computation from the service provider.
We believe this might be the right balance. Relatr could publish trust assertion events for profiles that have already been computed and still rely on the req/res flow. In this scenario, a client could operate by first fetching already published trust assertions, and if a profile lacks one, request it. The advantage of this approach is that it can be perfectly integrated with the current Relatr model, as for each request, an event can be published. This way, client developers can choose to just fetch, fetch and request, or just request trust assertions.
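The fetch-then-request flow described above could look roughly like this for a feed of profiles. Everything here is a hypothetical sketch: `ASSERTION_KIND` is a placeholder event kind (use whatever kind the provider actually publishes), and `fetchEvents` / `requestScore` stand in for a relay query and a Relatr-style req/res call, neither of which is a real API.

```typescript
const ASSERTION_KIND = 30382; // placeholder kind number, an assumption

interface Assertion {
  pubkey: string; // the profile being scored
  score: number;
}

async function scoresForFeed(
  profiles: string[],
  providerPubkey: string,
  // Stand-in for a relay query (e.g. a NIP-01 REQ with this filter).
  fetchEvents: (filter: {
    kinds: number[];
    authors: string[];
    "#p": string[];
  }) => Promise<Assertion[]>,
  // Stand-in for a direct req/res call to the trust provider.
  requestScore: (pk: string) => Promise<Assertion>,
): Promise<Map<string, number>> {
  // Step 1: one query for all already-published assertions in the feed.
  const published = await fetchEvents({
    kinds: [ASSERTION_KIND],
    authors: [providerPubkey],
    "#p": profiles,
  });
  const scores = new Map(published.map((a) => [a.pubkey, a.score]));

  // Step 2: req/res fallback only for profiles with no published assertion.
  const missing = profiles.filter((pk) => !scores.has(pk));
  const computed = await Promise.all(missing.map(requestScore));
  for (const a of computed) scores.set(a.pubkey, a.score);

  return scores;
}
```

A client that only wants published data can skip step 2 entirely, and a client that wants guaranteed freshness can skip step 1, which matches the "just fetch, fetch and request, or just request" choice described above.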
I like that a lot. It’s the best of both worlds and lets the clients (and users) decide which services to integrate and potentially to pay for.