So I finally watched the Nostrica presentation by rabble, and I must say it was awesome! Not just the fun Twitter history part, but everything he said about content moderation needing to work for scale to happen. I couldn't help but nod the entire time.

I also understand that some people are uneasy with content moderation, given that this is Nostr and all, but we have a choice to make: keep it niche, or appease the App Store overlords for mass adoption.

Also, at the relay level you’d still have plenty of choice.

Discussion

That's the rub though, isn't it? I'm going to be very interested in seeing how that goes.

I personally have no interest in Nostr if it becomes anything remotely like existing social media. Sane defaults for new users are one thing, but the majority of moderation needs to be done by individual users.

The moment it becomes possible to effectively block someone from being discoverable, Nostr ceases to have a legitimate reason to exist, imo. I'm hoping smarter people than me will be able to walk that line and navigate through it.

I don't think anyone can argue against the idea that some content should never be shown in a public app accessible by anyone, and that it shouldn't be left to the user to opt out. It just shouldn't show in the first place. Pretty much every nation on earth (probably) has laws for that.

I think it'll be possible to block the content but not the user, but you have to remember that at the end of the day the relay is responsible for hosting the content, and it will be held accountable under the law. It's up to the relay operator whether they want to serve jail time or not.

People harming other people is (and should be) illegal. But I'm not talking about that, and I hope to be understood.

There is a vast chasm between blocking, reporting, and prosecuting child porn, and siloing a group of people because of wrongthink.

And that's the whole issue. The whole rub lies right there. "Protect the children" is a cry that no sane person would argue against. But it never stops there. It becomes "well, what about X hate group," then "what about Y group that disagrees with us," and so on.

It comes down to who makes the choices about what I see in my feed. Freedom from harm so often gets reinterpreted as freedom from offense that I get real nervous during discussions on moderation. It's not moderating harmful content that I have an issue with; it's who defines "harmful" and how.

Relay operator makes the choice. Someone is always making a choice for you, whether you realize it or not.

Don't worry, you can still use unsavory clients to connect to unsavory content and see everything unfiltered. I have no doubt that will be possible. It's just not going to fly on Damus, Amethyst, or any other client easily downloadable from one of the app platforms.

I am highly doubtful that there is a good way to do content moderation other than letting users filter for themselves.

And I've never heard any sound argument for why a client needs to have filters built in.

Maybe a default normie filter that can be replaced by a custom filter is the way to go?
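
As a rough sketch of what that could look like (purely illustrative, not any existing client's actual code, and all the names here are made up): the client ships with a conservative default filter and lets the user swap it out for their own.

```typescript
// Purely illustrative sketch of a swappable client-side filter.
// All names (NostrEvent, ContentFilter, defaultNormieFilter) are hypothetical.

interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  content: string;
  tags: string[][];
}

// A filter just decides whether an event gets rendered.
type ContentFilter = (event: NostrEvent) => boolean;

// Default "normie" filter: hide events from muted pubkeys and events
// carrying a NIP-36 style "content-warning" tag.
function defaultNormieFilter(muted: Set<string>): ContentFilter {
  return (event) =>
    !muted.has(event.pubkey) &&
    !event.tags.some(([name]) => name === "content-warning");
}

// Power users replace the default with their own logic,
// e.g. `() => true` for a completely unfiltered feed.
let activeFilter: ContentFilter = defaultNormieFilter(new Set());

function visibleEvents(feed: NostrEvent[]): NostrEvent[] {
  return feed.filter(activeFilter);
}
```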

That's what I meant by sane defaults, personally.

I already don't see anything worse in my global feed than I see on most social media, but my global is limited to three paid relays, one of which I operate.

Relay operators do have a choice, and they should exercise it. As long as users are able to add and remove relays easily, that preserves user choice and power. Good relay operators will be rewarded; bad ones may find themselves in jail. Ultimately it's the user's choice whether to use them.

But that's no different than what we already have. Ultimately this is going to be a question of clients needing to run their own moderated relay as a default to satisfy store guidelines, no different from what fedi clients have done.

It's a slippery slope though. When I started a few weeks back, I was naive enough to think I could just block all the Chinese porn bots. I quickly realized it was never gonna end. There has to be a way to filter content. What happens to Nostr if someone starts blasting non-stop gore and child porn? Are the relay admins liable?

Yes, and the client too (at least in the eyes of the App Store).

There has to be a moderation mechanism of some type on both ends, I think.

This might be an API service of some sort that relays rely on.
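
Something like this, maybe: the relay asks a moderation service about each incoming event before storing it. This is only a hedged sketch; the endpoint, response shape, and function names are all invented for illustration.

```typescript
// Hypothetical sketch of a relay consulting an external moderation API
// before accepting an event. The endpoint and response shape are invented.

interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  content: string;
  tags: string[][];
}

const MODERATION_API = "https://moderation.example/check"; // placeholder

async function shouldAccept(event: NostrEvent): Promise<boolean> {
  const res = await fetch(MODERATION_API, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      id: event.id,
      pubkey: event.pubkey,
      content: event.content,
    }),
  });
  // Failing open vs. failing closed when the service is down is a policy
  // choice each relay operator would have to make for themselves.
  if (!res.ok) return true;
  const { flagged } = (await res.json()) as { flagged: boolean };
  return !flagged;
}

// In the relay's ingest path (pseudocode):
//   if (await shouldAccept(event)) store(event); else reject(event);
```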

Is it on YouTube?