There have been a lot of ideas about dealing with unwanted content on nostr. I'm going to try to break it down in this post.

Part 1: Keeping unwanted content off of relays

This is done for two reasons. The first is legal: you could get in trouble for hosting illegal content. The second is to curate a set of content that falls within some bounds of acceptability: perhaps flooding is not allowed, or spam posts about shitcoins are not allowed, maybe even mean posts are not allowed. It's up to the relay operator.

Early on, people talked about Proof of Work. The idea was to limit how fast a flooder or spammer could saturate your relay with junk, and therefore how much junk a moderator would have to look through. I don't know of any relay that went in this direction, and I don't think it's a great solution.
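
To make that concrete, here is a minimal sketch (not from any particular relay implementation) of how a relay could score proof-of-work in the spirit of nostr's NIP-13, by counting leading zero bits of the event id; the minimum difficulty shown is an arbitrary placeholder.

```python
def pow_difficulty(event_id_hex: str) -> int:
    """Number of leading zero bits in a hex-encoded event id (NIP-13 style difficulty)."""
    bits = bin(int(event_id_hex, 16))[2:].zfill(len(event_id_hex) * 4)
    return len(bits) - len(bits.lstrip("0"))

MIN_DIFFICULTY = 20  # placeholder; a real relay would pick its own threshold

def accept_event(event_id_hex: str) -> bool:
    """A relay enforcing PoW would simply reject anything below its threshold."""
    return pow_difficulty(event_id_hex) >= MIN_DIFFICULTY
```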

Then we saw paid relays. Paid relays only accept posts from their customers. This is a very effective solution: customers can still break the rules, but there is a much smaller set of people who can do so, and there are consequences.

But the downside of paid relays is that they cannot easily be used as inboxes. Ideally a relay would also work as an inbox for notes tagging any of its paid customers. Unfortunately, those responses can be floods, spam, or other unwanted content, so the same problem comes back around.

In the end, I think that in order to support people getting messages from anybody, relays will need to inspect content and make judgements about it, and this will need to be automated. Almost all email servers do spam filtering using Bayesian filters; we should probably be doing the same or something similar. Maybe AI can play a role.
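
As a rough illustration of what that automated judgement could look like, here is a toy naive-Bayes classifier over note text, in the spirit of email spam filters. Everything in it (the class name, the training labels) is a hypothetical sketch, not something relays currently run.

```python
import math
from collections import Counter

class NoteSpamFilter:
    """Toy naive-Bayes spam classifier over note text."""

    def __init__(self):
        self.words = {"spam": Counter(), "ham": Counter()}
        self.notes = {"spam": 0, "ham": 0}

    def train(self, text: str, label: str) -> None:
        self.notes[label] += 1
        self.words[label].update(text.lower().split())

    def spam_probability(self, text: str) -> float:
        tokens = text.lower().split()
        total_notes = sum(self.notes.values())
        vocab = len(self.words["spam"] | self.words["ham"]) + 1
        log_scores = {}
        for label in ("spam", "ham"):
            total_words = sum(self.words[label].values())
            # log prior + sum of log likelihoods, with add-one smoothing
            score = math.log((self.notes[label] + 1) / (total_notes + 2))
            for w in tokens:
                score += math.log((self.words[label][w] + 1) / (total_words + vocab))
            log_scores[label] = score
        top = max(log_scores.values())
        odds = {k: math.exp(v - top) for k, v in log_scores.items()}
        return odds["spam"] / (odds["spam"] + odds["ham"])

# Hypothetical usage: train on notes a moderator has already labelled.
f = NoteSpamFilter()
f.train("buy my shitcoin now, huge guaranteed gains", "spam")
f.train("gm nostr, shipping relay improvements today", "ham")
print(f.spam_probability("guaranteed shitcoin gains"))  # well above 0.5
```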

Part 2: Keeping unwanted content out of your own feed

The first thing clients can do is leverage Part 1. That is, use relays that do some of the work for you. Clients can avoid pulling global feed posts or thread replies from relays that aren't known to be managing content to the user's satisfaction.

The primary tool here is muting. Personal mute lists are a must. The downsides are that (1) they are post facto, and (2) they cannot control for harassment from people who really want to harass and just keep making up new keypairs to repeat the harassment.

We can fix the post-facto issue to a large degree with community mute lists (some people may call this 'blocking', but I don't want to confuse it with the Twitter feature that prevents a person from seeing your posts). People of like mind subscribe to and manage a shared mute list, so when someone is muted, everybody benefits: most people in that community won't see the offending posts.
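
A minimal sketch of how a client might apply this, assuming mute lists are nostr list events carrying "p" tags (as NIP-51-style lists do) and notes are simple dicts with a "pubkey" field:

```python
def muted_pubkeys(mute_list_events):
    """Collect pubkeys from mute-list events (lists carrying 'p' tags)."""
    muted = set()
    for event in mute_list_events:
        for tag in event.get("tags", []):
            if len(tag) >= 2 and tag[0] == "p":
                muted.add(tag[1])
    return muted

def visible_feed(notes, personal_mute_lists, community_mute_lists):
    """Hide notes by anyone on your own mute list or on a subscribed community list."""
    blocked = muted_pubkeys(personal_mute_lists) | muted_pubkeys(community_mute_lists)
    return [note for note in notes if note["pubkey"] not in blocked]
```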

That doesn't solve problem (2), however. For that we have even more restrictive solutions.

The first is the web-of-trust model: you only accept posts from people you follow, or from people they follow. This is highly effective, but it may silence posts you would have wanted to see.
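
A minimal sketch of that filter, assuming you have a follow graph (a map from pubkey to the set of pubkeys it follows, e.g. built from kind-3 contact lists):

```python
def web_of_trust(my_pubkey, follows):
    """Pubkeys within two hops: people I follow plus the people they follow."""
    first_hop = follows.get(my_pubkey, set())
    second_hop = set()
    for pk in first_hop:
        second_hop |= follows.get(pk, set())
    return {my_pubkey} | first_hop | second_hop

def wot_feed(notes, my_pubkey, follows):
    """Keep only notes authored from within the two-hop web of trust."""
    allowed = web_of_trust(my_pubkey, follows)
    return [note for note in notes if note["pubkey"] in allowed]
```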

The second is even more restrictive: private group conversations.

Finally I will mention two additional related features: thread dismissal and content warnings.

That's it. GM nostr!

Discussion

building a killer app. How do you run it over tor though?

I don't think it is smart to use tor as a proxy (e.g. 9050 or whatever) on a regular system. It is too easy for things to bypass tor (e.g. DNS lookups). Therefore I recommend doing it under whonix. In that case, everything goes over tor. Very few relays are .onion sites but tor has exit nodes so it should work fine. Gossip doesn't have any tor-specific code. You just run it from behind tor same as you run any other program behind tor.

makes sense. thanks

I really appreciate that explanation, thanks

Relays have the right to apply any strategy: they can ask for money, they can spam-filter, or do whatever they deem fit for the content stored on their servers. They should even sell the intent (not the content) to LLMs for training purposes, as long as they declare it upfront. Rich relays are the backbone of #nostr.

I wouldn't subscribe to a community mute list unless the community was extremely small and I had a high degree of trust for those in it (risk being: some mute-happy dingus keeps me from seeing stuff I'd like to see in a way I can't get around in that community). But what I'm describing here is just a worse version of WoT.

WoT solves the problem much better, with some modifications:

- don't limit the web to 2nd order connections. Keep it going much, much further.

- "trust" in said web needs contextual tags. I might trust you highly for your opinions on Rust (and want to see your + your Rust connections notes in that context) but trust you less on "Mute" and may want to filter you and your "mute" connections out at will.

- use clients to filter your feed based on the trust contexts ("show me everyone"; "show me Rust + Bitcoin"; "show everyone but filter out Mute"; etc.)

Trust models that treat humans as a monolith are an anti-pattern. My human connections are each a complex of contexts and I almost never have a binary feeling about a person as a whole.

The web of connections is modeled as a flow network, where "trustiness" (along a context/tag) flows back to you from each network node according to how "open" the pipes are between you. ie "how much do I trust a, how much does a trust b, how much does b trust n, etc." where your first order connections matter most to you and the flow beyond them is determined by the subsequent hops. Someone you trust highly on Rust who trusts someone else highly on Rust - this last person you can trust a lot because of your connections in between.

And if you and that final connection have a very different trust relationship for a different context, you'll view them accordingly for that context (importantly: separately from how you trust them on Rust).
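
A minimal sketch of one way to compute that per-context, max-product "flow" (the names, numbers, and the [0, 1] scale are illustrative, not from the Area project):

```python
import heapq

def contextual_trust(graph, source, context, max_hops=4):
    """Best-path trust from `source` to every reachable node for one context.

    `graph[a][b][context]` is how much `a` trusts `b` on that context, in [0, 1].
    Trust along a path is the product of the edge trusts, so each extra hop can
    only dilute it; we keep the best (highest-product) path, Dijkstra-style.
    """
    best = {source: 1.0}
    heap = [(-1.0, source, 0)]  # (negative trust, node, hops taken)
    while heap:
        neg, node, hops = heapq.heappop(heap)
        trust = -neg
        if trust < best.get(node, 0.0) or hops >= max_hops:
            continue  # stale entry, or too far out in the web
        for neighbor, contexts in graph.get(node, {}).items():
            t = trust * contexts.get(context, 0.0)
            if t > best.get(neighbor, 0.0):
                best[neighbor] = t
                heapq.heappush(heap, (-t, neighbor, hops + 1))
    return best

# Illustrative graph: I trust alice a lot on Rust but little on muting.
graph = {
    "me":    {"alice": {"rust": 0.9, "mute": 0.2}},
    "alice": {"bob":   {"rust": 0.8}},
}
print(contextual_trust(graph, "me", "rust"))  # bob ends up around 0.72
print(contextual_trust(graph, "me", "mute"))  # alice only 0.2; bob never appears
```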

I have worked on a similar system in a different stack (urbit; the project was called "Area"). Here is a brief overview of how that worked: https://gist.github.com/vcavallo/e008ed60968e9b5c08a9650c712f63bd

It's worth mentioning: you can still get your functionality with my proposal: have a single "filter" node that you just give 100% trust to on [all topics]. Then that node "curates" literally everyone for you.

This gives you the same final result as a central filter.

Or: assign high trust on [all topics] to the community curator(s). That gives you the "community mute list" final result. (Because these community curators would be handing out "mutes" in the form of negative infinity trust to "mutees")
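
Continuing the sketch above: giving a single curator full trust on a catch-all context reproduces exactly that. Zero trust here stands in for the "negative infinity" mute, and the curator's mutees never reach your feed. (Names and numbers are illustrative.)

```python
graph = {
    "me":      {"curator": {"all": 1.0}},
    "curator": {"carol":   {"all": 1.0},   # curator vouches for carol
                "spammer": {"all": 0.0}},  # curator "mutes" the spammer
}
print(contextual_trust(graph, "me", "all"))
# {'me': 1.0, 'curator': 1.0, 'carol': 1.0} -- the spammer never enters the feed
```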

Last point: curation is an art and is not easy. There's no reason people couldn't charge for this service and offer "assign x% of your trust to me" for a fee. That's _essentially_ what people are doing when they subscribe to the NYT and expect it to handle what they should see and think about.

No loss of functionality for anyone involved, from the most granular and subjective to the most hive-minded.

I wouldn't subscribe to any community mute list unless or until we get flooded with spam bots, because I'm pretty tough and not bothered by hateful content. But some people want different things than I do.

WoT excludes people who are freshly joining nostr. A lot of people want to greet them.

It's not uncommon for clients to have a "default follows" initial state. I think this is weird and gross, but it's also a proven tactic in decentralized social media.

Warpcast, for instance, has you following something like 80 people when you install it, and that community is known for having good solutions to cold-start and scaling hurdles.

If you can't post illegal content on nostr, then why do I need nostr?

I can already post legal content elsewhere.