Could nostr be used to solve fake reviews spam?

I was just thinking about this a bit and think maybe it could?

Right now, every platform has a problem with fake reviews. Despite what the platforms claim, fake reviews leak into all sorts of services, most notoriously Amazon.

New platforms based on Nostr could implement Web of Trust mechanisms where npubs are used to calculate a trust score (an aggregate of all the npubs who commented on a product, service, or whatever it may be). This score could be used to show confidence levels in the notes.

The platform implementing this could offer different criteria for confidence levels (so the user can always be in control of which criteria matters to them).

For example, the user could select “aged npubs with x number of followers” or only npubs who post on this platform frequently, or any combination of any other factors (following, network of people I follow), that may indicate a quality response (perhaps an image of the product).
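A rough sketch of what such a user-selected filter might look like, in Python. The profile fields and thresholds here are all hypothetical; a real client would pull them from kind 0 metadata and follow lists:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable

@dataclass
class NpubProfile:
    npub: str
    created_at: datetime      # when the npub was first seen
    follower_count: int
    posts_on_platform: int

def make_criteria(min_age_days: int, min_followers: int) -> Callable[[NpubProfile], bool]:
    """Build a confidence filter from the user's chosen criteria."""
    def passes(p: NpubProfile) -> bool:
        age = datetime.utcnow() - p.created_at
        return age >= timedelta(days=min_age_days) and p.follower_count >= min_followers
    return passes

# The user picks "aged npubs with at least 100 followers"
criteria = make_criteria(min_age_days=365, min_followers=100)
reviewers = [
    NpubProfile("npub1old", datetime(2022, 1, 1), 500, 40),
    NpubProfile("npub1new", datetime.utcnow(), 2, 1),
]
trusted = [p for p in reviewers if criteria(p)]
```

The same shape extends to any other factor (posting frequency, overlap with your follow list) by adding fields and combining predicates.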

Obviously, the spam will continue, but at least the user will be armed with ways of differentiating which npubs may be spam and which aren’t.

Thoughts?


Discussion

Agree with you 💯

This approach would be great not only for reviews; I think it could help prevent other kinds of spam posts as well.

Some traditional platforms also use web of trust. The problem is that not all of them have payment history; Yelp, for example, has no payments.

But on Nostr we can implement two scores: a global score, and a personal score based only on the user's settings. Even better would be to use an LLM to explain the global score by reading all the comments.

If nostr implements 2 scores like you mention (global and personal), the personal score will eventually eclipse the global score in power and importance. It’s something that can only be implemented properly in a decentralized system like nostr. Centralized systems are simply unable to provide personal scores for various reasons, which is why we’ve never seen their power unleashed.

A global score is nothing different than a particular personal score that is centered around whoever represents the community. eg when Google calculates PageRank scores for websites, it’s basically a system of “personal” scores, except that the only score we ever see is Larry Page’s (Google’s) “personal” score, and none of the rest of us ever get to see our personal scores. So the global score = Larry’s personal score. And in a broader sense, “global” score always = the “personal” score of an individual or entity.
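The PageRank framing above can be made concrete with a tiny personalized-PageRank sketch: a uniform restart vector produces the "global" score, while seeding the restart at one npub produces that npub's "personal" score. The follow graph and names here are made up for illustration:

```python
# Personalized PageRank over a follow graph (hypothetical data).
def pagerank(graph, seed, damping=0.85, iters=50):
    """graph: node -> list of followed nodes; seed: node -> restart probability."""
    nodes = list(graph)
    rank = {n: seed.get(n, 0.0) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - damping) * seed.get(n, 0.0) for n in nodes}
        for n in nodes:
            out = graph[n]
            if not out:
                continue
            share = damping * rank[n] / len(out)   # split rank among follows
            for m in out:
                nxt[m] += share
        rank = nxt
    return rank

follows = {
    "alice": ["bob"], "bob": ["carol"], "carol": ["alice"],
    "mallory": ["mallory2"], "mallory2": ["mallory"],   # isolated spam ring
}
uniform = {n: 1 / len(follows) for n in follows}   # "global" = everyone's seed
personal = {"alice": 1.0}                          # Alice's "personal" seed

global_rank = pagerank(follows, uniform)   # spam ring still earns some rank
alice_rank = pagerank(follows, personal)   # spam ring gets zero from Alice's view
```

Note how the only difference between "global" and "personal" is whose restart vector you use, which is exactly the point: the global score is just somebody's personal score.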

I wrote years ago about personal vs global scores here:

https://github.com/WebOfTrustInfo/rwot1-sf/blob/master/Principle-of-Relativity-for-WoT.md

Very insightful take!

I think that’s what slashtags were supposed to solve. I think it’s doable here.

It seems as though it would be extremely computationally expensive to perform a web of trust score calculation for every attempted review post. Other than that one potential tradeoff, I don’t see why this couldn’t work beautifully.

I think you’re correct about it being computationally expensive. Strategies will have to be implemented to optimize the calculations. A challenging problem, but no reason for it to be an intractable one.

Yeah definitely not. Caching could probably help mitigate it.

Yup. And only do the calculations that are most relevant. Your web of trust will help you to know which sources of information to include and exclude, which will help you to avoid wasting resources on low-yield calculations.

btw I have a feeling from your banner pic nostr:npub1s6ka82ar3g9tswkqhrwyf3j50eq7uttdnzplp9k8r59su8adxfnsjafz5h that we just might agree on the #1 and #2 movies of all time 😅

Maybe to make it simpler, we can look at how people take non-anonymous recommendations: through friends and friends of friends.

A less computationally expensive way might be to weight direct follows highest, follows of follows lower, and so on.

At some point there may be spam at n degrees of separation, but that could be weighted as negligible.

This method does have its own problems but may be a good starting point.
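A sketch of that degree-of-separation weighting: breadth-first search out from your own npub, halving the weight per hop and pruning everything past a cutoff. The graph, halving factor, and cutoff are all arbitrary choices for illustration:

```python
from collections import deque

def separation_weights(follows, me, max_depth=3):
    """Weight each reachable npub by follow distance: 0.5 ** depth, pruned past max_depth."""
    weights, frontier, seen = {}, deque([(me, 0)]), {me}
    while frontier:
        node, depth = frontier.popleft()
        if depth > 0:
            weights[node] = 0.5 ** depth
        if depth < max_depth:
            for nxt in follows.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
    return weights

follows = {"me": ["friend"], "friend": ["fof"], "fof": ["stranger"]}
w = separation_weights(follows, "me", max_depth=2)
# friend (depth 1) -> 0.5; fof (depth 2) -> 0.25; stranger pruned as negligible
```

Because it only walks out to max_depth, the cost is bounded by the size of your local neighborhood rather than the whole network.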

Aged npub? Can’t I just edit the profile metadata or the timestamp on a kind 1?

I don’t know. Throwing ideas out there so people can pick them apart or find something that may work.

A social credit score by any other name?

I’m not sure how you derive that from an npub, considering that in many cases you know nothing about who is behind it. It’s an aggregate web of trust score.

Web of trust based reputation, whether it’s called “social credit scores” or something else, is necessary and unavoidable. Just like money, it becomes dystopian when it’s controlled centrally, but not when it’s decentralized.

btc != the dystopia of fiat, and likewise: decentralized reputation != the dystopia of centralized reputation.

The challenge, therefore, is how to decentralize it.

nostr:note1pcgt9cvq2a7ndva9lzzrcfssfasypzcsnz20jy4zqmatnmg88z4qs56lnh

One could also utilize the power of AI, having it read an npub's notes, replies, and other actions over time to determine the likelihood of the npub being real versus a bot or spammer.

I don’t have confidence in that.

I’ve tried existing “fake content” detectors and they failed to flag my generated content as fake 🤷‍♂️

Won't work.

Fake reviews are too valuable, and getting around defenses like this is too cheap.

Challenge accepted 🫡

I would love to see this experiment and I'm often wrong about these things.

Couldn't this be solved by introducing a financial (dis)incentive system?

I could imagine a system where you pay very little to cast a balanced review and extra to cast a very positive or negative one. Cost per review would have to be adjusted according to product price (perhaps a percentage of the average market price?).

This introduces a cost for everyone, and an increasing cost for both fake positive reviews and negative review bombing, making them less likely. The cost for reviewers could be offset via zaps, or zap-like alternatives, for genuinely helpful reviews, which could even make reviewing profitable.

If needed it could be coupled with (some parts of) reputation systems like the one on Stack Overflow.
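The proposed fee schedule could look something like this sketch. Every parameter (the base percentage, the extremity premium, the 1–5 rating scale) is a hypothetical choice, not part of the proposal:

```python
def review_fee_sats(avg_price_sats: int, rating: int,
                    base_pct: float = 0.001, extremity_premium: float = 2.0) -> int:
    """Fee for posting a review: a small cut of the product's average market
    price, scaled up for extreme ratings. rating is 1-5; 3 is balanced."""
    extremity = abs(rating - 3) / 2           # 0.0 for balanced, 1.0 for 1 or 5 stars
    multiplier = 1.0 + extremity_premium * extremity
    return round(avg_price_sats * base_pct * multiplier)

# For a 100,000-sat product: a balanced review costs 100 sats, a 5-star costs 300
print(review_fee_sats(100_000, 3))  # 100
print(review_fee_sats(100_000, 5))  # 300
```

Review bombing with 1-star ratings pays the same premium as 5-star shilling, so the cost asymmetry targets exactly the two extremes being gamed.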

+ put down some sats?

Where do they go?

Web of trust will do a better job than the legacy systems. I’ve built a proof of concept to show one strategy to build it. UI needs work, but it’s functional.

https://github.com/wds4/pretty-good/blob/main/appDescriptions/curatedLists/exampleListCuration.md

Tangential issue is: How to incentivize the real reviews?

Readers zapping sats to valuable reviews, sure. But would it be enough?

I agree. Incentives are important. I imagine a system where people charge sats for access to their reviews, and you’ll be willing to pay them if the reviewer is trusted above some threshold by your web of trust. That will incentivize people to provide useful reviews, so they’re trusted more, so they get paid more.

Freenet has a WoT implementation. I read their whitepaper and found it interesting. Yes, I agree Nostr would benefit from something similar. I trust we'll get there...

https://github.com/hyphanet/plugin-WebOfTrust

Also, check out nostr.band's "trust rank"

https://trust.nostr.band/