Michael

Suddenly, Start9 and Joinmarket are booming, and will be for the foreseeable future. #privacy #bitcoin #selfhosting

Homestead and homeschool boom is upon us. #homesteading

Sounds about right.

Replying to JeffG

I’m really enjoying nostr:npub12vkcxr0luzwp8e673v29eqjhrr7p9vqq8asav85swaepclllj09sylpugg's new top zaps feature.

It’s totally changed my behavior in just one day. I’ve found myself more than once today stopping to really think about what I want to write in the zap comments. 🤯

Great example of how a small product change can have an outsized impact.

how do you use that feature?

There have been a lot of ideas about dealing with unwanted content on nostr. I'm going to try to break it down in this post.

Part 1: Keeping unwanted content off of relays

This is done for two reasons. The first is legal: you could get into trouble for hosting illegal content. The second is to curate a set of content that is within some bounds of acceptability: perhaps flooding is not allowed, or spam posts about shitcoins are not allowed, maybe even mean posts are not allowed. It's up to the relay operator.

Early on, people talked about Proof of Work. It was meant to limit how fast a flooder or spammer could saturate your relay with junk, and therefore how much junk a moderator would have to look through. I don't know of any relay that went in this direction, and I don't think it's a great solution.
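For context, the Proof of Work idea on nostr (NIP-13) measures difficulty as the number of leading zero bits in an event's id, so a relay could cheaply reject events below a minimum difficulty. A minimal Python sketch of that check (the function names are mine, not from any existing library):

```python
def leading_zero_bits(event_id_hex: str) -> int:
    """Count leading zero bits in a hex-encoded event id (NIP-13 difficulty)."""
    bits = 0
    for ch in event_id_hex:
        nibble = int(ch, 16)
        if nibble == 0:
            bits += 4
        else:
            # Add the zero bits at the top of the first nonzero nibble, then stop.
            bits += 4 - nibble.bit_length()
            break
    return bits

def meets_pow(event_id_hex: str, min_difficulty: int) -> bool:
    """A relay could run this on ingest and drop events that fall short."""
    return leading_zero_bits(event_id_hex) >= min_difficulty
```

The check is cheap for the relay, while the sender has to grind nonces to produce a qualifying id, which is exactly what rate-limits a flooder.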

Then we saw paid relays. Paid relays only accept posts from their customers. This is a very effective solution. Customers can still break the rules, but the set of people who can do so is smaller, and there are consequences.

But paid relays have a downside if they cannot also be used as inboxes. Ideally a relay would work as an inbox for notes tagging any of its paying customers. Unfortunately, those incoming notes can be floods, spam, or other unwanted content, so the same problem comes back around.

In the end, I think that in order to support people getting messages from anybody, relays would need to inspect content and make judgements about it. And this is going to need to be automated. Almost all email servers do spam filtering using Bayesian filters. We probably should be doing the same or similar. Maybe AI can play a role.
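As a rough illustration of the Bayesian approach (this is a toy sketch in Python, not an existing nostr or email library; the class and method names are hypothetical), a word-frequency filter scores a note by comparing how often its words appeared in previously-labeled spam versus legitimate notes:

```python
import math
from collections import Counter

class NaiveBayesSpamFilter:
    """Toy naive Bayes filter in the spirit of email spam filters."""

    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()
        self.spam_notes = 0
        self.ham_notes = 0

    def train(self, text: str, is_spam: bool) -> None:
        """Record word counts from a note a moderator has labeled."""
        words = text.lower().split()
        if is_spam:
            self.spam_words.update(words)
            self.spam_notes += 1
        else:
            self.ham_words.update(words)
            self.ham_notes += 1

    def is_spam(self, text: str, threshold: float = 0.0) -> bool:
        """Log-odds score with Laplace smoothing; positive means 'more likely spam'."""
        spam_total = sum(self.spam_words.values()) + 1
        ham_total = sum(self.ham_words.values()) + 1
        score = math.log((self.spam_notes + 1) / (self.ham_notes + 1))
        for w in set(text.lower().split()):
            score += math.log((self.spam_words[w] + 1) / spam_total)
            score -= math.log((self.ham_words[w] + 1) / ham_total)
        return score > threshold
```

A real relay would need ongoing labeled training data, but the core math is this small, which is part of why the email world settled on it.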

Part 2: Keeping unwanted content out of your own feed

The first thing clients can do is leverage Part 1. That is, use relays that do some of the work for you. Clients can avoid pulling global feed posts or thread replies from relays that aren't known to be managing content to the user's satisfaction.

The primary tool here is mute. Personal mute lists are a must. The downsides are that (1) they are post-facto, and (2) they cannot stop harassment from people who really want to harass and just keep making up new keypairs to repeat it.

We can fix the post-facto issue to a large degree with community mute lists (some people may call this 'blocking', but I don't want to confuse it with the Twitter feature that prevents a person from seeing your posts). People of like mind subscribe to and manage a shared mute list, so when someone is muted, everybody benefits: most people in that community won't see the offending post.
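On the client side, combining a personal mute list with subscribed community lists is just a set union applied before rendering. A hypothetical Python sketch (the event dict shape follows nostr's `pubkey` field; the function names are mine):

```python
def build_mute_set(personal_mutes, community_mute_lists):
    """Union a personal mute list with any subscribed community mute lists.

    Each argument is an iterable of hex pubkeys (community_mute_lists is an
    iterable of such iterables)."""
    muted = set(personal_mutes)
    for mute_list in community_mute_lists:
        muted.update(mute_list)
    return muted

def visible(events, muted):
    """Drop events authored by muted pubkeys before rendering a feed."""
    return [e for e in events if e["pubkey"] not in muted]
```

The nice property is that subscribing to a community list costs the client nothing extra at render time: one membership check per event either way.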

That doesn't solve problem 2, however. For that we have even more restrictive solutions.

The first is the web-of-trust model. You only accept posts from people you follow, or from people they follow. This is highly effective, but may silence posts you would have wanted to see.
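That allow-list can be sketched as one hop of follows-of-follows, assuming the client has each person's follow list available (hypothetical Python; real clients would build `follows` from kind-3 contact-list events):

```python
def web_of_trust(my_pubkey, follows):
    """Allowed authors: people I follow, plus the people they follow.

    `follows` maps a pubkey to the set of pubkeys it follows."""
    allowed = set(follows.get(my_pubkey, set()))
    for followed in list(allowed):
        allowed |= follows.get(followed, set())
    return allowed

def accept(event, allowed):
    """Keep only events from inside the web of trust."""
    return event["pubkey"] in allowed
```

Because a harasser's fresh keypair has no follow path to you, it is excluded automatically, which is what personal mute lists cannot achieve.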

The second is even more restrictive: private group conversations.

Finally, I will mention two additional related features: thread dismissal and content warnings.

That's it. GM nostr!

building a killer app. How do you run it over tor though?

Won't be a smooth ride but bitcoin will win. #bitcoin

How do you run gossip over tor? #asknostr

You've been spot on.