Yes. Any time you pass control of your feed to anyone or anything else, it also becomes the author of what you see and know.

It’s easy to say, “well I can turn it off.”

But how will you know to turn it off? How will you know what it is filtering? You cannot know what you cannot know. That is especially true with AI, which operates as a black box; even the engineers who design it often cannot explain why it surfaces one post and buries another.

Stronger tools for users to filter their own feeds are needed. It ought to be standard practice for users to be able to filter posts in their feed by keyword, hashtag, and NIP-05 domain.
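To make that concrete, here is a rough sketch in TypeScript of what such a client-side filter could look like. The event shape follows NIP-01 (hashtags are `"t"` tags), but `FeedFilter`, `shouldHide`, and the `authorNip05` argument are just names I made up for illustration; a real client would pull the NIP-05 identifier from the author's verified kind-0 metadata.

```typescript
// Hypothetical client-side feed filter: keyword, hashtag, NIP-05 domain.
interface NostrEvent {
  pubkey: string;
  content: string;
  tags: string[][]; // e.g. ["t", "bitcoin"] marks a hashtag per NIP-01
}

interface FeedFilter {
  mutedKeywords: string[]; // case-insensitive substrings of content
  mutedHashtags: string[]; // without the leading '#'
  mutedDomains: string[];  // NIP-05 domains, e.g. "example.com"
}

function shouldHide(
  ev: NostrEvent,
  authorNip05: string | null, // verified identifier from kind-0 metadata
  f: FeedFilter,
): boolean {
  const text = ev.content.toLowerCase();
  if (f.mutedKeywords.some((k) => text.includes(k.toLowerCase()))) return true;

  const hashtags = ev.tags
    .filter((t) => t[0] === "t" && t[1])
    .map((t) => t[1].toLowerCase());
  if (f.mutedHashtags.some((h) => hashtags.includes(h.toLowerCase()))) return true;

  if (authorNip05) {
    const domain = authorNip05.split("@").pop()?.toLowerCase() ?? "";
    if (f.mutedDomains.includes(domain)) return true;
  }
  return false;
}
```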

NIP-05 providers likewise need a control panel where they can see what the people they "verify" are posting. If I'm providing NIP-05 verification and someone is spamming or posting porn or beheading videos, I should be able to invalidate their NIP-05.
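The invalidation part is already mechanically simple: per NIP-05, clients verify an identifier like alice@example.com by fetching https://example.com/.well-known/nostr.json?name=alice and comparing the returned pubkey. So "invalidating" someone is just dropping their entry, as in this rough sketch (the in-memory map and function names are hypothetical; a real provider would back this with a database):

```typescript
// Sketch of NIP-05 invalidation on the provider side. Removing the
// entry breaks verification on the client's next lookup.
const names: Record<string, string> = {
  alice: "<hex pubkey>", // placeholder; real entries are hex-encoded pubkeys
};

// Called when a verified user starts spamming or posting abuse.
function invalidate(name: string): void {
  delete names[name]; // the next nostr.json response simply omits them
}

// The JSON body the provider serves at /.well-known/nostr.json
function nostrJson(): string {
  return JSON.stringify({ names });
}
```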

PoW (NIP-13) also needs wider adoption, and relays need better moderation tools. The lack of easy web UIs for managing relays is a real gap. Individual relays should be able to tailor the content they accept. Censorship resistance is maintained by how easy and cheap it is to run relays, and by the low friction for users to add new ones.
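For reference, the PoW check itself is cheap for a relay to run: under NIP-13, difficulty is the number of leading zero bits in the event id (a hex-encoded sha256 hash). A minimal sketch, with `minDifficulty` as a hypothetical relay policy knob (a full NIP-13 check would also look at the committed target in the event's nonce tag):

```typescript
// Count leading zero bits in a hex-encoded event id (NIP-13 difficulty).
function leadingZeroBits(idHex: string): number {
  let bits = 0;
  for (const ch of idHex) {
    const nibble = parseInt(ch, 16);
    if (nibble === 0) {
      bits += 4; // whole nibble is zero
    } else {
      bits += Math.clz32(nibble) - 28; // leading zeros within this nibble
      break;
    }
  }
  return bits;
}

// Hypothetical relay-side policy gate before accepting an event.
function meetsPow(idHex: string, minDifficulty: number): boolean {
  return leadingZeroBits(idHex) >= minDifficulty;
}
```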

These features need to be easy for operators, so that even small ops can handle them.

Anyway, just my 2 sats, but between NIP-05 services, relays, and keyword filtering, if the tools were built, there is no reason someone could not enjoy nostr without seeing objectionable content, while still having the freedom to see it if they wanted to.

In a very real way, users as individuals would retain all the power to decide how much they want the training wheels on or off: they could share swap lists of relays, NIP-05 providers, and keywords to filter, similar to crowd-sourced ad-block lists, or turn filtering off altogether and get the firehose. One possible shape for such a list is sketched below.
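A swap list could be as simple as a small JSON document. Nothing here is a standard; it's just one illustrative shape, with made-up names and example URLs:

```typescript
// Hypothetical shareable "swap list", analogous to a crowd-sourced
// ad-block list: relays to use, plus domains and keywords to mute.
interface SwapList {
  name: string;
  relays: string[];        // e.g. "wss://relay.example.com"
  mutedDomains: string[];  // NIP-05 domains to filter out
  mutedKeywords: string[];
}

const familyFriendly: SwapList = {
  name: "family-friendly starter pack",
  relays: ["wss://relay.example.com"],
  mutedDomains: ["spam.example.org"],
  mutedKeywords: ["gore"],
};

// Turning the training wheels off is just using an empty list:
const firehose: SwapList = {
  name: "firehose",
  relays: [],
  mutedDomains: [],
  mutedKeywords: [],
};
```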
