engagement buying will be a constant problem

webs of trust / verifiable reputations are probably the best way to mitigate it


Discussion

I've been worried about the fake accounts I keep seeing and reporting too. Just saw a fake Ross account that was trying to collect zaps by reposting what he had said. Sad.

yup, there will be many bots and impersonators

Hi. I'm actually the literal Satoshi...

And I had a dream you zapped me 4 sats.

Will my dream come true? πŸ€ŸπŸ™πŸ˜πŸ˜œπŸ˜

Whaddap bro? Your shit is πŸ”₯πŸ”₯πŸ”₯πŸ™

You might wanna get some in case it catches on...

freedom protocols build scalable trust

nostr:npub1u5njm6g5h5cpw4wy8xugu62e5s7f6fnysv0sj0z3a8rengt2zqhsxrldq3 πŸ‘†πŸ»

Is that maybe just a form of paid advertising? But I guess you get the money back in your own wallet, so probably not. And you can rinse and repeat.

Nostr is just like the real world: people will naturally gravitate toward signals they can trust.

So you are saying most people will gravitate toward X?

I can’t speak for others, but I always move toward more freedom. X is a platform. Nostr is a protocol. X isn’t leveraging the freedom Nostr enables, but it could.

People buying engagement are verifying their bad reputation.

🎯

And hopefully, the people in my grapevine who are paying attention will be able and willing to flag the bad actors as such. (This would be an example of what I mean by explicit contextual trust attestations.)

How would you flag something like that? So we need a report option for "paid shill"?

You could use a NIP-56 kind 1984 event to report a pubkey, using “paid shill” as a custom report type.
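A minimal sketch of what that report event might look like. NIP-56 puts the report type in the third element of the "p" tag; "paid-shill" is not one of the standardized types (spam, impersonation, etc.), so it's used here as a custom value. The pubkey is a placeholder, and a real event would also need "id", "pubkey", and "sig" fields computed and signed per NIP-01:

```python
import json
import time

# Placeholder 64-hex-char pubkey of the account being reported.
REPORTED_PUBKEY = "deadbeef" * 8

# Unsigned NIP-56 report event (kind 1984) with a custom report type.
report_event = {
    "kind": 1984,
    "created_at": int(time.time()),
    "tags": [
        # ["p", <reported pubkey>, <report type>] per NIP-56;
        # "paid-shill" is a non-standard, custom type.
        ["p", REPORTED_PUBKEY, "paid-shill"],
    ],
    "content": "Account appears to be buying engagement / shilling for pay.",
}

print(json.dumps(report_event, indent=2))
```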

Your personalized WoT relay could gather all reports with this type (or similar types), discarding reports authored by pubkeys not in your WoT to mitigate gaming the system. If paid shills become a problem that people actually care about, they can start using this system.
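The filtering step above could be sketched roughly like this. Event dicts follow the NIP-01 shape; the WoT set itself would come from your own follow graph, and the "paid-shill" type is the hypothetical custom value from the discussion:

```python
def filter_reports(reports, wot_pubkeys, report_type="paid-shill"):
    """Keep kind 1984 reports of the given type authored by trusted pubkeys."""
    kept = []
    for ev in reports:
        if ev.get("kind") != 1984:
            continue
        if ev.get("pubkey") not in wot_pubkeys:
            continue  # discard reports from outside the WoT to resist gaming
        # NIP-56 carries the report type in the 3rd element of the "p" tag.
        types = {t[2] for t in ev.get("tags", []) if len(t) >= 3 and t[0] == "p"}
        if report_type in types:
            kept.append(ev)
    return kept

# Toy example: only the report from a WoT member survives.
reports = [
    {"kind": 1984, "pubkey": "alice", "tags": [["p", "mallory", "paid-shill"]]},
    {"kind": 1984, "pubkey": "stranger", "tags": [["p", "mallory", "paid-shill"]]},
]
print(len(filter_reports(reports, {"alice", "bob"})))  # -> 1
```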

https://github.com/nostr-protocol/nips/blob/master/56.md

Hmmmm

#asknostr

πŸ’―% agree

This is why we need explicit contextual trust attestations IN ADDITION TO proxy indicators of trust. If someone I trust flags someone else as doing specifically the bad thing you mention, I’ll know not to trust that person in that context.

Verifiable reputations? Sure. Might help a bit.

But it's tinkering at the edges.

The core issue: we're optimizing for the wrong things online. Engagement, popularity, scale. Often hollow.

Forget that. It's a distraction.

Focus on depth. On authenticity. On being genuinely useful or interesting.

Stop measuring breadth. Start noticing substance.

Build things for the people who appreciate real. Reward that.

Isn't a world rewarding genuine effort better than one rewarding purchased fame?

Perhaps engagement will not be the ultimate incentive in Nostr interactions.

Can we talk more about this please? πŸ™