Filtering spam is a useful service. Many Nostr threads are already overrun by shitcoin airdrop scams, for example. IMO, this problem gets worse as Nostr grows, and simple mute lists are not enough because spammers can easily create new accounts for each new spam message. Primal's current capabilities in dealing with this are far from perfect: we may miss some spam, we may get false positives, and some users might disagree with us on the definition of spam. That's why we are building the tools to give more control and the final word to the user.

I am serious when I say that competent content filtering will be a key feature that Nostr clients compete on. There will be clients that don't do any filtering whatsoever (or make filtering opt-in instead of opt-out), and that will attract users who want that. We don't claim to have the final answer to any of this. We are simply working on providing the best service we can, and doing so in a transparent manner (our entire stack is FOSS). We really hope that others will stand up services with different rules so that we can all learn from each other and adjust.

Discussion

What do you think about an option for users to charge for commenting on their notes?

A very small fee might prevent spam without deterring genuine engagement.
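As a purely illustrative sketch (not anything Primal or any other client actually implements), the gate could be as simple as checking that a reply arrives with a payment proof above the note owner's chosen threshold. The field names, the payment source (e.g. a zap receipt), and the 1-sat default below are all assumptions:

```typescript
// Hypothetical pay-to-comment gate. Data shapes, fee threshold, and where the
// payment proof comes from are assumptions for illustration only.

type Pubkey = string;

interface ReplyRequest {
  author: Pubkey;         // pubkey of the would-be commenter
  noteId: string;         // note being replied to
  paidMillisats: number;  // amount proven by an attached payment receipt
}

// A tiny default fee so genuine engagement isn't deterred.
const DEFAULT_MIN_FEE_MSATS = 1_000; // 1 sat

// Accept a reply only if the attached payment clears the owner's threshold.
function acceptReply(
  req: ReplyRequest,
  minFeeMsats: number = DEFAULT_MIN_FEE_MSATS,
): boolean {
  return req.paidMillisats >= minFeeMsats;
}

// A 1-sat reply passes; a zero-fee spam reply is dropped.
console.log(acceptReply({ author: "npub1...", noteId: "note1...", paidMillisats: 1_000 })); // true
console.log(acceptReply({ author: "npub1...", noteId: "note1...", paidMillisats: 0 }));     // false
```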

"We need to protect people from

harmless online spam

.

.

.

.

harmfull misinformation

.

.

.

.

hate speech"

Agreed. I'm new to the tech (and a no-coder), but I believe there is potential for:

1. Each client to run its own unique filtering. nostr:npub16c0nh3dnadzqpm76uctf5hqhe2lny344zsmpm6feee9p5rdxaa9q586nvr, if the filters were announced upfront as a policy, people might be more receptive.

2. Each relay to draw in a self-selecting audience.

I foresee small relays as Mastodon-style planets, with users choosing their favorite spaceship-clients to get there.

nostr:npub1gcxzte5zlkncx26j68ez60fzkvtkm9e0vrwdcvsjakxf9mu9qewqlfnj5z #amethyst #bugs opening this note crashes amethyst nostr:nevent1qqsxfv97rhcxx5hfh06t7crrwy0uzdac3v63u3aasechs2k9lnuen5spzpmhxue69uhkummnw3ezuamfdejsygxkruautvltgsqwlkhxz6d9c972hueyddg5xcw7jwwwfgdqmfh0fgpsgqqqqqqs9267km

How does “give more control and the final word to the user” become “I’ll tell you what is spam”?

Miljan, brother, would it be a problem for you to explain why all of these individuals are removed from being able to trend? I'm sure you have good reasons, and I'm also sure this is a pain in the ass, but I feel transparency here is best. As Bitcoiners and Nostriches, we love freedom. You know this well. I also know that you're trying to build a top-tier experience on Nostr, and filtering spam is a key part of obtaining that goal. It's a hard balance, I get this! But I also believe that to be the best, you sometimes have to do a bit more explaining than you thought was needed. We'd all like to know why these people were singled out. It doesn't have to be overly in-depth, just some clarification. Thanks.

This is a very sensible position! This is what nostr is all about. If people truly want absolutely zero filtering in their client, they will find and use a client that allows for this.

Y'all building Nostr clients have crossed the point from all competing to add each new feature one client comes up with, to beginning to separate based upon the sets of principles/features you believe will compete in the marketplace. We knew this would come, and I think it is largely a good thing.

It will be fascinating to watch the debate unfold on whether the Nostr protocol itself should have certain values/principles that devs build to, or whether values should play out at the client/relay level.

I for one am encouraged we have an ecosystem in which we can debate this.

nostr:note1vjctu80svdfwnwl5hasxxuglcymm3ze4rermmpn30q4vtl8en8fqyxmptd

I'll start by saying I'm not a developer, so I don't truly know why someone hasn't implemented Saylor's orange check yet, but IMO it's an easy, unbiased way to stop spam.

People won't be able to pay to trend (spamming zaps) too many times, because they will eventually run out of money. Sure, it's early, so it's relatively cheap to do for now, but I don't see this being the end state.

Regardless, people are not upset by you trying to stop spam; they are upset by the lack of transparency, or the perceived lack of transparency. If you had a Primal feed where people had to put up some sats to be displayed, I'm sure they would use it. Then have your unfiltered feed where anything goes. Let people pick for themselves and they will be happy; pick for them and they will leave your platform. Especially anyone that's here now: the current Nostr users are going to be even more averse to any kind of filtering than future users will be, so as a developer you must be particularly sensitive to perceptions around censorship, IMO.

I actually like the approach Twitter introduced a while back, where they have a section at the bottom of threads where posts/users who are ranked as low quality are placed.

This way, you effectively have a section that says “most likely garbage past this point”, and it’s up to the user if they want to expand it and look through it. For the vast majority, it buries crap so that by default, they don’t need to see it.

The important part is how that “ranked low quality” part is done. I honestly think this could be something as simple as basing it on the net upvote/downvote ratio of a user's posts. If they get downvoted more often than upvoted, their posts/replies go into the “garbage content likely” section and they don't show up in notifications.
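A rough sketch of what that bucketing might look like, assuming each reply carries its author's aggregate reaction counts; the data shapes and the threshold are illustrative assumptions, not Twitter's or any Nostr client's actual ranking:

```typescript
// Hypothetical "likely low quality" bucket: rank replies by the author's net
// upvote/downvote ratio and collapse the low scorers behind an expandable section.

interface ReplyView {
  id: string;
  authorUpvotes: number;   // total upvotes the author has received
  authorDownvotes: number; // total downvotes the author has received
}

// Net ratio in [-1, 1]; authors with no reactions stay neutral at 0.
function netRatio(r: ReplyView): number {
  const total = r.authorUpvotes + r.authorDownvotes;
  return total === 0 ? 0 : (r.authorUpvotes - r.authorDownvotes) / total;
}

// Split a thread: replies below the threshold go into the collapsed
// "most likely garbage past this point" section, everything else stays visible.
function splitThread(
  replies: ReplyView[],
  threshold = 0,
): { visible: ReplyView[]; collapsed: ReplyView[] } {
  return {
    visible: replies.filter((r) => netRatio(r) >= threshold),
    collapsed: replies.filter((r) => netRatio(r) < threshold),
  };
}
```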

nostr:nevent1qqsxfv97rhcxx5hfh06t7crrwy0uzdac3v63u3aasechs2k9lnuen5spk9mhxue69uhkummnw3ezumt0d5k8wumn8ghj7un9d3shjtnddaehgu3wwp6kytrhwden5te0danxvcmgv95kutnsw43zcamnwvaz7tmwdaejumr0dsk8wumn8ghj7mn0wd68yv339e3k7mfvwaehxw309aex2mrp0yhxgctdw4eju6t093mhxue69uhhyetvv9ujuurvv438xarj9e3k7mfvwaehxw309aex2mrp0yhxummnw3ezucn893mhxue69uhhyetvv9ujumn0wd68ytnzv9hxg9r853r

there is a "simple" solution for all spam and sybil attacks:

numerical scoring. it's orders of magnitude more accurate than likes or zaps.

when everyone can manually score the relevancy of posts and keys (e.g. from -10 to +10),

and we can then filter to only see posts by keys that we score highly ourselves, or that have similar scoring behaviour to ours.

this way scammers never have the chance to become "insiders" of high-value networks, because any "insider" who gives high scores to scammers will be downscored very fast.

this is maybe the only truly decentralized moderation that can exist, because it mimics tribal behaviour and scales it to the whole planet.

I have regularly thought through all kinds of attack vectors for 3 years now, so let me know if you are interested in building this; I might be able to help 🙏
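For illustration, a minimal sketch of how such manual -10..+10 scoring could drive a feed filter: compare my score sheet with other scorers' sheets and only surface authors that I, or scorers similar to me, rate highly. The structures, the cosine-similarity choice, and the thresholds are all assumptions, not a spec:

```typescript
// Hypothetical scoring-based filter. Each scorer keeps a sheet of ratings
// (-10..+10) for other keys; a post is shown if I rate its author highly,
// or if a scorer whose ratings correlate with mine does.

type Pubkey = string;
type ScoreSheet = Map<Pubkey, number>; // one scorer's ratings of other keys, -10..+10

// Cosine similarity over the keys both scorers have rated; 0 if there is no overlap.
function scoringSimilarity(mine: ScoreSheet, theirs: ScoreSheet): number {
  let dot = 0;
  let normMine = 0;
  let normTheirs = 0;
  for (const [key, a] of mine) {
    const b = theirs.get(key);
    if (b === undefined) continue;
    dot += a * b;
    normMine += a * a;
    normTheirs += b * b;
  }
  return normMine && normTheirs ? dot / Math.sqrt(normMine * normTheirs) : 0;
}

// Show a post if I score its author highly, or if a sufficiently similar scorer does.
function shouldShow(
  author: Pubkey,
  myScores: ScoreSheet,
  otherScorers: Map<Pubkey, ScoreSheet>,
  minScore = 3,
  minSimilarity = 0.5,
): boolean {
  if ((myScores.get(author) ?? 0) >= minScore) return true;
  for (const sheet of otherScorers.values()) {
    if (
      scoringSimilarity(myScores, sheet) >= minSimilarity &&
      (sheet.get(author) ?? 0) >= minScore
    ) {
      return true;
    }
  }
  return false;
}
```

The similarity term is what keeps this personal: another scorer only influences my feed to the extent that their past ratings agree with mine, which is roughly the "insiders who vouch for scammers get downscored and lose influence" dynamic described above.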