The sentiment of needing to differentiate is a critical element of what Nostr needs to succeed. There are lots of problems with existing platforms and therefore lots of opportunities to differentiate and innovate. Talk to people who still use them to find the opportunity spaces.

The community also needs to address two other aspects in order to see widespread adoption.

Based on the conversations two weeks ago, there is still work to do to make Nostr a place where everyone wants to hang out. There are some nascent components for avoiding content one does not want to see, but they are not yet enough to protect against the harassment primarily experienced by women on this network. If you want a man cave with a side of porn, the status quo works fine; but if you want widespread adoption, then additional NIPs (or work on existing NIPs) are required so people can protect themselves from bad actors.

Bitcoin may be different from the other shitcoins, but the general public sees it all as crypto. If that's the only content they see here, they will pass. That's why we launched the Creator Residency and Journalism Accelerator. It's also an opportunity to test micropayments for content creation and journalism, as well as a way to reframe Lightning.

A subset of people will also pass if the main selling point is "say anything you want." This is because most platforms today show you content beyond your network, so the assumption is that if Nostr has free speech, they will have to see everything. But that is not how it works (except for the aggregation feeds). If the community spends a bit more time talking about choosing your feed and a bit less about free speech, adoption should shift.

In both instances branding matters, specifically how we talk about what already exists in Nostr. Updating how we talk about Nostr can go a long way to increasing adoption.


Discussion

There are two items that can help, based on the discussion a couple of weeks ago:

1. If someone is harassing a user, it is not enough to mute that user, because they are still in the replies. In some cases the harasser's friends see the replies and pile on. The person being harassed doesn't know the harassment is continuing and is then blindsided by the additional harassment coming their way. Freedom from harassment needs to extend to freedom from having someone in your replies who continues to harass you. The current model assumes Nostr is a level playing field that exists in a cultural vacuum, and that is not the case. As a result, those who experience harassment IRL also feel the brunt of it here. Direct harassment is different from saying whatever you want: it involves being in someone's replies or mentioning them directly. People need the ability to say no to both.

2. The second issue women reported was being found by random jerks. This happens because some apps have aggregator feeds. Users need the option to opt out of these.

I also think there’s more user research needed to understand if other problems exist.

Searchnos also handles NIP-09 event deletion, and my deployment does not keep indexes for long periods of time (30 days with the current configuration).
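For reference, a NIP-09 deletion request is an ordinary event of kind 5 whose `e` tags list the events to be deleted. A minimal sketch of building one (the event shape is simplified, and the helper name is illustrative rather than from any particular library):

```typescript
// Minimal shape of an unsigned Nostr event (id/pubkey/sig omitted).
interface NostrEvent {
  kind: number;
  created_at: number;
  tags: string[][];
  content: string;
}

// Build a NIP-09 deletion request: kind 5, one "e" tag per event id to
// delete, with an optional human-readable reason in `content`.
function buildDeletionRequest(eventIds: string[], reason = ""): NostrEvent {
  return {
    kind: 5,
    created_at: Math.floor(Date.now() / 1000),
    tags: eventIds.map((id) => ["e", id]),
    content: reason,
  };
}
```

An indexer like Searchnos would watch for kind-5 events and drop the referenced event ids from its index.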

Jerks' daily job is to find targets; I'm not sure deindexing from aggregators would help much. You would then have to deindex from the major public relays too, and only post your stuff on paid relays (paid to read, not only to write). Meaning, you shouldn't be on public Nostr if you're afraid of being found by someone determined to find you.

I think the only anti-harassment solution that could work on Nostr is client-side filtering, based on contact/mute lists, friends' reports, etc. Don't show replies from people you don't follow, or from authors who were reported/muted many times by people you follow, or replies from public relays. I bet some of these policies are implemented in nos.social, but the issue is that everyone's using Damus/Amethyst/Primal, and those have nothing like that.
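The policies described above boil down to a single client-side check per reply. A sketch, with hypothetical function and parameter names (the threshold value is an assumption, not from any existing client):

```typescript
// Hypothetical client-side reply policy. `follows` is the viewer's contact
// list; `reportsFromFollows` counts how many of the viewer's follows have
// reported or muted each author.
function shouldShowReply(
  author: string,
  follows: Set<string>,
  reportsFromFollows: Map<string, number>,
  reportThreshold: number = 3,
): boolean {
  // Always show replies from people the viewer follows.
  if (follows.has(author)) return true;
  // Hide authors reported/muted by many of the viewer's follows.
  return (reportsFromFollows.get(author) ?? 0) < reportThreshold;
}
```

A "replies from public relays" rule would slot in the same way, as another early return based on which relay the event arrived from.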

The way I see it, we should have a separate pluggable layer/API/NIP for content post-filtering that can be plugged into any app: the app forms a feed (main/replies/notifications/anything) and passes all the events from the feed to the filter; the filter returns various labels (spam/harassment/nsfw/impersonation/...); and the app covers the content of each labeled event and shows the labels above it. This way apps don't have to rebuild their feed-building logic; they just apply another layer on top of it. Users would specify the filtering API endpoint in the settings and get the filtering they want. Safe mode could be "cover notes from users I don't follow until the filter returns its labels; uncover if no bad labels are returned", while a more reckless mode could be "show notes first, only hide them if the filter returns some bad labels".
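A rough sketch of that layer (all names here are hypothetical; a real filter would be an asynchronous call to the user-configured endpoint, kept synchronous here for brevity):

```typescript
// Labels the filter might attach to an event.
type Label = "spam" | "harassment" | "nsfw" | "impersonation";

interface FilterResult {
  eventId: string;
  labels: Label[];
}

// The pluggable filter: given the event ids of a feed, return labels for
// each. In practice this would be an HTTP call to the endpoint the user
// configured in settings.
type FilterApi = (eventIds: string[]) => FilterResult[];

// "Safe mode": keep every note covered until the filter clears it, and
// uncover only the notes that come back with no bad labels.
function applySafeMode(eventIds: string[], filter: FilterApi): Set<string> {
  const uncovered = new Set<string>();
  for (const result of filter(eventIds)) {
    if (result.labels.length === 0) uncovered.add(result.eventId);
  }
  return uncovered;
}
```

The "reckless mode" is the inverse: show everything immediately, and cover only the events the filter later flags.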

If nos or anyone is interested in experimenting with me in this area, let me know.

Amethyst has this, too. It used to be activated by default until people started forking because of this feature.

Sounds like some half decent logic 👍🏻