I’d love to see someone make a client that runs #relatr, let people sign up (perhaps for a small fee), calculate personalized trust metrics including “rank” and “followers” (meaning: verified followers) and publish them using nostr:npub1gcxzte5zlkncx26j68ez60fzkvtkm9e0vrwdcvsjakxf9mu9qewqlfnj5z's Trusted Assertions. This would greatly facilitate integration of relatr scores into clients for nostr:npub1healthsx3swcgtknff7zwpg8aj2q7h49zecul5rz490f6z2zp59qnfvp8p ‘s upcoming WoT hackathon!

#wotathon

nostr:npub1dvmcpmefwtnn6dctsj3728n64xhrf06p9yude77echmrkgs5zmyqw33jdm

nostr:npub17plqkxhsv66g8quxxc9p5t9mxazzn20m426exqnl8lxnh5a4cdns7jezx0

nostr:npub1qe3e5wrvnsgpggtkytxteaqfprz0rgxr8c3l34kk3a9t7e2l3acslezefe

https://nostr.at/naddr1qvzqqqr4gupzq6ehsrhjjuh885mshp9ru50842dwxjl5z2fcmnaan30k8v3pg9kgqq2hjjt5vd45x6msd428ztt0wazn2st8t968zyx8g23

https://straycat.brainstorm.social/about-trusted-assertions.html

Discussion

This is definitely an intriguing experiment. If one trust calculation service becomes dominant, you’ve effectively recreated Twitter’s recommendation algorithm problem, just with extra steps.

Is this where we want to go? I understand we want to solve real problems: spam, impersonation, and the signal-to-noise issues that plague open protocols. But still…? 🤔

Ideally, every nostr user would run open source software locally that calculates personalized trust metrics. This is why brainstorm is open source: so you can run your own instance and calculate your own personalized trust metrics. No need to trust a third party if you’re willing to put in a little effort.

Realistically, not everyone is a developer, and most users will want someone else to do it for them, just as most nostr users don’t run personalized relays and most Bitcoin users don’t run their own full nodes. That doesn’t mean the entire endeavor is a failure. A small fraction of people actually running their own Bitcoin nodes, while almost everyone has the practical (not just theoretical) ability to do so, is better than the status quo, where basically 0% of people can audit the fiat system in theory or in practice.

So how do we maximize the number of people who actually do calculate their own personalized trust metrics? Answer: we will have to make it as easy and user friendly as possible. And did I say open source? It needs to be open source. The lower the barrier to entry, the healthier the ecosystem.

Ok, I can get on board with where you’re going. This makes more sense to me.

Hey! Yes, that could be a great idea, but there are some details we should consider when evaluating this feature. The interaction with relatr is based on JSON-RPC as defined in the ContextVM spec, similar to DVMs: scores are computed at query time, and there is no requirement to persist them as nostr events. The idea is to serve freshly computed scores and keep control of the cache instead of relying on replaceable events that can become stale. This approach also avoids nostr broadcasting/propagation quirks, such as different relays holding different versions of the latest published event. But yes, we could enhance this by publishing trusted assertions in some way; we have to think about the best way to do it.
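
For readers unfamiliar with the flow being described, a query-time score request could look roughly like the sketch below. The method and parameter names are hypothetical and not taken from the ContextVM spec or the relatr code; only the JSON-RPC 2.0 envelope is standard.

```typescript
// Rough sketch of a query-time score request as a JSON-RPC 2.0 payload.
// The method name and params are hypothetical, not from the ContextVM spec.
interface ScoreRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: {
    source: string;    // pubkey whose perspective the scores are computed from
    targets: string[]; // pubkeys to score
  };
}

const request: ScoreRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "computeTrustScores", // hypothetical method name
  params: {
    source: "<alice-pubkey-hex>",
    targets: ["<bob-pubkey-hex>", "<carol-pubkey-hex>"],
  },
};

// The response would carry freshly computed scores rather than pointing at
// previously published replaceable events.
console.log(JSON.stringify(request, null, 2));
```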

I think we are ultimately going to need multiple ways to deliver personalized trust metrics from service providers to clients that use them. Each method will have its own set of tradeoffs.

One question I have about the WoT DVM approach: how many requests can one server handle per second? Suppose I am on Amethyst and I’m scrolling a content feed. Each kind 1 event will need scores for the event author plus the authors of each reaction (so I can screen out the spam). And if I’m scrolling quickly, that could be a lot of rapid-fire DVM requests.

Perhaps a hybrid approach would be to publish scores as Trusted Assertions and also support a DVM-like method to trigger updates of selected Trusted Assertions.
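
For illustration, a published assertion could be an addressable nostr event along these lines. The kind number and tag names below are assumptions loosely based on the Trusted Assertions write-up linked in the note, not a confirmed schema.

```typescript
// Sketch only: kind 30382 and the "rank"/"followers" tag names are assumptions
// based on the Trusted Assertions write-up, not a confirmed schema.
const assertion = {
  kind: 30382,
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["d", "<scored-pubkey-hex>"], // one addressable event per scored profile
    ["rank", "87"],               // hypothetical personalized rank
    ["followers", "142"],         // hypothetical verified-follower count
  ],
  content: "",
  // pubkey, id, and sig are added when the provider signs the event
};

console.log(assertion);
```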

Personalized trust metrics definitely fit into the “no solutions, only tradeoffs” category. If every metric is always computed at request time and never stale, we limit not only the number of requests per unit time but also the complexity of the scores. Suppose I want to generate a baseline “real user, not a bot” score using follows, mutes, reports, zaps, and content interactions. I want to use that to curate a list of nostr devs, and I want the list of nostr devs to curate a list of NIPs. That is a daisy-chain of three scores. What if I want to chain together more than three scores in a complex configuration? It becomes impractical to recalculate every score from scratch each time it is used. Somewhere along the way, personalized trust metrics will have to be cached, with the frequency of updates based on priority and the availability of resources.
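
To make the chaining concrete, here is a toy sketch of why each layer ends up reading cached results from the layer below rather than recomputing from raw events every time. The score names, weights, and cache strategy are all made up for this example.

```typescript
// Toy illustration only: score names, weights, and caching are invented here,
// not taken from any real implementation.
const cache = new Map<string, number>();

function cached(key: string, compute: () => number): number {
  if (!cache.has(key)) cache.set(key, compute());
  return cache.get(key)!;
}

// Layer 1: baseline "real user, not a bot" score from raw interactions.
function notBotScore(pubkey: string): number {
  return cached(`not-bot:${pubkey}`, () => 0.9 /* follows, mutes, reports, zaps... */);
}

// Layer 2: "nostr dev" score built on top of layer 1.
function nostrDevScore(pubkey: string): number {
  return cached(`dev:${pubkey}`, () => notBotScore(pubkey) * 0.8);
}

// Layer 3: NIP-curation weight built on top of layer 2.
function nipCurationWeight(pubkey: string): number {
  return cached(`nip:${pubkey}`, () => nostrDevScore(pubkey) * 0.7);
}

console.log(nipCurationWeight("<some-pubkey-hex>"));
```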

Hey! I'm going to try to answer your questions here.

Regarding how many requests a server can handle per second: there is no one-size-fits-all answer. The CVM transport is not the bottleneck here; throughput depends on how the underlying server is designed and whether it is optimized to handle a large number of requests per second.

On the other hand, regarding stale computations: it is indeed impractical to recalculate all metrics all the time. In the case of Relatr, we cache these computations with a TTL, so each computation is done once and invalidated when the TTL expires. The rapidly-scrolling-feed example should be no problem to handle, though it is not the ideal use case for Relatr’s current architecture, since it implies a request/response round trip. If the events are already published on relays, you just need to craft a specific filter for them and fetch them.
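
A minimal sketch of the TTL idea described above; the shape of Relatr’s actual cache may differ.

```typescript
// Minimal TTL-cache sketch; Relatr's real implementation may differ.
class TtlCache<T> {
  private entries = new Map<string, { value: T; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  get(key: string): T | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(key); // invalidate once the TTL expires
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}

// Scores are computed once and served from cache until the TTL lapses.
const scoreCache = new TtlCache<number>(15 * 60 * 1000); // e.g. a 15-minute TTL
scoreCache.set("<scored-pubkey-hex>", 0.87);
console.log(scoreCache.get("<scored-pubkey-hex>"));
```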

As you mentioned, there are no perfect solutions, and we are still refining Relatr. We will definitely keep trusted assertions in mind and see how we can integrate them into our current model.

What are your thoughts on monetization?

Monetization should be up to the provider. We currently run the default Relatr instance for free, and we will continue to do so for the time being as we are still building the service. Once it is more mature, we can consider different ways to monetize it. On the other hand, monetization is already possible using ContextVM and it will be better integrated once CEP-8 is completed. https://github.com/ContextVM/contextvm-docs/issues/8

What are your thoughts on monetization?

The first question to ask is: who’s the customer? Who pays to get scores calculated? There will be many models, but to me the most straightforward model is for the end user to be the customer. Alice subscribes to a service which calculates her personalized trust metrics and makes them easily, readily available to clients.

Another model is for the clients to be the customer. Maybe it will work, but I do wonder how many clients will want to pay for this. Most of them will be on a shoestring budget, and if they’re not, they’ll just calculate scores internally. But as a user, I don’t want my personalized scores to look different whenever I change clients. For example, I want to select an algo for verified follower count and see the same counts wherever I go.

Further considering the pull vs. request/response (req/res) models, it is true that using already published trust assertion data can be convenient and easy to query with appropriate filters. However, one issue is that not all profiles are scored. For instance, a service provider might start publishing a large number of trust assertions, but the set will never be fully complete because it is dynamic, and it's impossible to anticipate which users need to be scored. Consider the example of a nostr feed we discussed: each profile should be complemented with trusted assertions. But what if one of the profiles in the feed doesn't have any trust assertion attached? In that case, the only solution is to request the trust computation from the service provider.

We believe this might be the right balance. Relatr could publish trust assertion events for profiles that have already been computed while still relying on the req/res flow. In this scenario, a client would first fetch already published trust assertions and, if a profile lacks one, request it. The advantage of this approach is that it integrates cleanly with the current Relatr model, since an event can be published for each request. This way, client developers can choose to only fetch, to fetch and request, or to only request trust assertions.
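
A rough sketch of that client-side flow, with placeholder helpers standing in for the relay query and the provider request (and the same assumed event kind as in the earlier example):

```typescript
// Sketch of the "fetch first, request on miss" flow. Both helpers are
// placeholders; a real client would query relays (e.g. by kind and #d tag)
// and call the provider over ContextVM here.
type Assertion = { pubkey: string; rank?: number };

async function fetchPublishedAssertions(
  providerPubkey: string,
  subjects: string[],
): Promise<Map<string, Assertion>> {
  return new Map(); // placeholder: pretend nothing has been published yet
}

async function requestAssertionFromProvider(subject: string): Promise<Assertion> {
  return { pubkey: subject, rank: 0 }; // placeholder: pretend the provider answered
}

async function assertionsForFeed(
  providerPubkey: string,
  feedAuthors: string[],
): Promise<Map<string, Assertion>> {
  // 1. Cheap path: pull whatever the provider has already published.
  const found = await fetchPublishedAssertions(providerPubkey, feedAuthors);

  // 2. Fallback: ask the provider to compute scores for the missing profiles.
  for (const author of feedAuthors.filter((a) => !found.has(a))) {
    found.set(author, await requestAssertionFromProvider(author));
  }
  return found;
}

assertionsForFeed("<provider-pubkey-hex>", ["<author-1>", "<author-2>"]).then(
  (scores) => console.log(scores),
);
```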

I like that a lot. It’s the best of both worlds and lets the clients (and users) decide which services to integrate and potentially to pay for.

Can we already sign up? ;)

We will get back to you with feedback / questions / answers etc!

A little birdie tells me you should put Nov 20 on your calendar for the first WoTathon community call. More details soon!