I’m open to ideas.
But I imagine you understand that we can’t just have a free-for-all at scale. There are governments that will take down relays if they’re sources of child porn and copyrighted material. You don’t need to agree that that material is illegal or bad; just at least recognise that states will use it as an excuse to try to destroy Nostr.
This proposal is meant to be the most freedom- and decentralisation-maximising way to handle the problem. It’s entirely opt-in.
In a world of many small relays there will be enough choice, of relays and of the blocklists they use, to pick a regime you agree with. Even if that regime is a free-for-all.
I’m trying to preserve our ability to use Nostr once it scales, and this kind of content is exactly what threatens that future.
Thanks, that was super helpful.
Anything written? Or should I use a DVM to transcribe one and make a highlight of it? (lol)
Anyone know if nostr:npub1l2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqutajft or anyone else wrote up an intro to what Data Vending Machines are? I have a basic idea but want to confirm my understanding.
This seems to follow the same principles we’ve been talking about for Nos social. The major difference is we are leaning into NIP-32 labels for “passing judgement” on users and content rather than lists, which I think is more expressive and will scale better. For instance you can have client-side rules that say things like “if 1 of my friends reported this person, put a content warning on their stuff; if 3 reported them, don’t show them at all”. We’re about to start building a web app that allows relay owners (or any user) to view content reports and take action on them.
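Roughly, a client-side rule like that could look something like this. A minimal TypeScript sketch, assuming NIP-32 label events (kind 1985) that tag the reported user with a “p” tag; the thresholds are just the ones from the example above:

```typescript
// Hypothetical sketch of the client-side rule described above, assuming
// NIP-32 label events (kind 1985) with a "p" tag naming the labelled user.
// The event shape follows NIP-01; everything else here is illustrative.

interface NostrEvent {
  pubkey: string;   // author of the label event (the reporter)
  kind: number;
  tags: string[][]; // e.g. [["L", "some.namespace"], ["p", "<hex pubkey>"]]
}

type Verdict = "show" | "warn" | "hide";

// Count how many of *my* friends have published a label against `target`,
// then apply the "1 friend => warn, 3 friends => hide" rule.
function judge(target: string, labels: NostrEvent[], friends: Set<string>): Verdict {
  const reporters = new Set(
    labels
      .filter((e) =>
        e.kind === 1985 &&
        friends.has(e.pubkey) &&
        e.tags.some(([name, value]) => name === "p" && value === target)
      )
      .map((e) => e.pubkey)
  );
  if (reporters.size >= 3) return "hide";
  if (reporters.size >= 1) return "warn";
  return "show";
}
```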
I think shared blocklists are still really useful in the short term. This is the main way Matrix fights spam to this day, and it’s still working for them at their scale I believe. It would be nice if we could leverage the existing mute lists (kind 10000) somehow, as there is already a lot of valuable moderation-type data sitting in them. I would be careful about using the word “block” because it implies that the other user can’t see your content if you’ve blocked them (which is true on most social platforms today but not on Nostr).
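Pulling moderation signal out of an existing mute list might look something like this. A sketch assuming the NIP-51 layout, where public entries live in the event’s tags (“p” for pubkeys, “t” for hashtags); private entries are encrypted into `content` and ignored here:

```typescript
// Rough sketch: extract the public entries from a NIP-51 mute list
// (kind 10000) so they can be reused as a moderation signal.
function mutedFrom(muteList: { kind: number; tags: string[][] }) {
  if (muteList.kind !== 10000) throw new Error("not a mute list");
  const pubkeys = muteList.tags.filter((t) => t[0] === "p").map((t) => t[1]);
  const hashtags = muteList.tags.filter((t) => t[0] === "t").map((t) => t[1]);
  return { pubkeys, hashtags };
}
```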
I wrote some more about our vision for moderation here: naddr1qqxnzd3cxsurvd3jxyungdesqgsq7gkqd6kpqqngfm7vdr6ks4qwsdpdzcya2z9u6scjcquwvx203dsrqsqqqa282c44pk / https://habla.news/a/naddr1qqxnzd3cxsurvd3jxyungdesqgsq7gkqd6kpqqngfm7vdr6ks4qwsdpdzcya2z9u6scjcquwvx203dsrqsqqqa282c44pk
Hell yeah I’ll look more into it. I think y’all might be right about the architecture.
You’re totally right, I’m gonna keep digging because there’s some combo here that I think could be the foundation for the solution.
You’re so right.
We need more paid relays, but I think over time free relays will die off naturally because the model is unsustainable. On top of that we need more robust relay software, including the ability to manage one’s own filters. That’s where I want to spend my time.
For now though, I think I’ll build a blocklist manager that just allows someone to manage and publish NIP-51 lists, as a proof of concept for helping people moderate their own feeds.
Regarding blocking npubs not being enough: I agree. Having blocklists include hashtags would help as well, since blocking npubs alone pushes bad actors toward a one npub <> one note setup, which makes it hard for unsavoury content to be distributed without communication off Nostr… or a hashtag.
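The core of that proof of concept might be as simple as building the list event itself. A sketch carrying both npubs and hashtags, assuming a parameterized-replaceable NIP-51 list (kind 30000 with a “d” identifier); treat the kind choice as an assumption rather than settled spec:

```typescript
// Hypothetical sketch of what the blocklist manager might publish: a NIP-51
// list event carrying both pubkeys ("p" tags) and hashtags ("t" tags).
function buildBlocklistEvent(name: string, pubkeys: string[], hashtags: string[]) {
  return {
    kind: 30000,
    created_at: Math.floor(Date.now() / 1000),
    content: "",
    tags: [
      ["d", name],                        // stable identifier so the list is replaceable
      ...pubkeys.map((pk) => ["p", pk]),  // blocked npubs (hex pubkeys)
      ...hashtags.map((t) => ["t", t]),   // blocked hashtags
    ],
  }; // sign and publish with your client library of choice
}
```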
I think that will be a good start. It’ll evolve if it solves problems people care about.
Great frame. I may edit the proposal to phrase it that way.
Agreed. We have to earn the right to solve the problems of scale, but they are going to arrive quickly and it’s best to be prepared!
Agreed there are challenges to this architecture.
As part of the proposal I talk about how these blocklists will likely need to be paid for. At scale, small subscriptions can fund a moderate amount of ongoing labour to maintain them.
People will only pay if they find value, which is how they’ll vote on how they want their own feeds managed.
Definitely need this to be self-sustaining for it to work and not add to centralisation.
Agreed! That future would be supported by this proposal. Communities would be able to choose for themselves what kind of content they want filtered out. If any!
That’s a good point, I’ll keep digging into that.
For relays, I think the advantage of subscribing to lists is that they have more tools to avoid getting taken down by law enforcement. It’s not about deciding what users should see; it’s just about being defensive against the statists until the statists can be overpowered by freedom tech.
And core to this is user choice. Users should choose relays that manage content the way they agree with (or need to because of local laws). Including no management of content at all!
🫂 no rush! I’m learning a lot from the conversations happening already 😊
I think I’ll want to reframe the proposal’s title to help make it clear this isn’t about implementing censorship.
Yes. I think we actually agree with each other and I’m not sure if I’ve communicated that very well.
It’s not about eliminating unsavoury content. That’s impossible because of Nostr’s architecture, which I agree with.
It’s about isolating it so that bad content doesn’t poison the well.
Thanks for making time to take a read.
I agree we are still pretty early and I don’t think this proposal would even really come to fruition for years.
I’m just trying to start the conversation because we will have to (at some point) reckon with the people that will use Nostr for truly nefarious purposes.
Please hear one thing if nothing else. This strategy only works if a vast majority opts in. There is nothing about this proposal that is about forcing content moderation on anyone. It’s just about helping users and relays curate their own feeds, which you’ve stated is your hope.
I agree education and users managing their own feed is the best path forward. I just want to give people more tools to do it with in the future.
I agree. People should be their own algorithm. And this proposal isn’t about shoving blocklists on users; they’d only start using them if they want to. That would help normal people manage their algorithm without spending as much time on it.
This is about allowing a spectrum to exist for users on Nostr. Not everyone has the time or energy to carefully curate their feed and build their own algorithm. Some will want help, which is the idea behind blocklists people can opt into.
But on the topic of how this won’t actually ban bad actors or unsavoury content: agreed.
Using blocklists will help isolate bad actors onto a smaller set of relays, so they’re not poisoning the well for users who aren’t sharing and consuming that content.
There’s no way to ban users or content from Nostr; that’s the whole point. But this could be a way to keep censorship resistance while still isolating the truly bad actors, so that all Nostr usage doesn’t get lumped together with terrorists and human traffickers.
Absolutely. That’s what the proposal is all about.
There would be no content moderation forced on anyone. Each user can choose the blocklists they want to utilise including none at all or one they manage themselves.
It’s just a way to let the community create an ecosystem of blocklists based on the various tags people commonly want filtered from their feeds.
Blocklist providers would be service providers to users, clients and relays, so that they don’t have to duplicate the effort of identifying content that will get their relay taken down and/or make users complain about their feeds.
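On the relay side, consuming such a provider could be as simple as a check like this. A hypothetical sketch; the `Blocklist` shape is made up, standing in for whatever format providers end up publishing:

```typescript
// Sketch: before accepting an event, a relay checks the author and hashtags
// against blocklists the operator has opted into.
interface Blocklist { pubkeys: Set<string>; hashtags: Set<string>; }

function shouldReject(
  event: { pubkey: string; tags: string[][] },
  blocklists: Blocklist[]
): boolean {
  const eventTags = event.tags.filter((t) => t[0] === "t").map((t) => t[1]);
  return blocklists.some(
    (bl) => bl.pubkeys.has(event.pubkey) || eventTags.some((t) => bl.hashtags.has(t))
  );
}
```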
If a relay you use subscribes to a blocklist you disagree with, switch to a relay that uses a different set of blocklists. Or relays that use none at all.
Yessir I’m very on board with that.
I think since this strategy is entirely consent-based (every level of Nostr user chooses what blocklists they want to use), most relay providers will be conservative, as you mention.
But I’m open to any ideas on making that as explicit as possible.
Thanks for reading and giving feedback!
Agreed, that’s why the proposal is about allowing users to determine what content they want filtered out, without every user having to do the work of identifying all the sources of content they don’t want to see.
I imagine you might want to filter out child porn from your feed for example.