It took me a while to untangle what felt off about the video.

1) It's almost a "not even wrong" problem, in that one can agree with their stance of not supporting the AI industry as a driver of job losses, slop, etc. etc. But they're averse to it the way you'd be averse to a color, and "The AI" is one color: anything painted with it gets rejected wholesale.

At that point you're basically arguing for manual content review. Pushed further, you'd almost expect them to start freaking out that the abacus was ever invented.

2) They're also implicitly coming from a mindset where platforms should protect their users, while at the same time the platform's revenue model is fundamentally antithetical to their wellbeing. So users have to continually tread the dissonance of feeling like their opinions matter to the very people farming them.

Because everything on nostr is so open, we can only expect that everything written or done there can be analyzed at any level of detail. We're not asking "platforms" to protect us in any way. We WANT our data/speech to propagate as a fundamental property. Once that's established, we have other knobs for tweaking the level of openness and visibility.

Discussion

Not to say it was a bad video, or even an illogical one, if you start from the premises that AI = automation and that platforms should protect you while also farming your use of their product.

It's good to know how the other side thinks. When you're in an echo chamber like Nostr, you sometimes forget there are other viewpoints out there.

Completely. Writing this helped me understand more of Cory Doctorow's perspective - a solid diagnosis of the problems, but the way forward is completely different if you expect platforms to protect you.

I don’t really understand what you were arguing here. Read the post like 3 times lol

They were talking about how "no AI content" seemed good, but they didn't like that AI would be used for content analysis, that it was used to build the app itself, and that OtherStuff had AI everywhere else in it. They basically conflated AI with automation and said they didn't want it anywhere.

The second point is about where they're coming from. They think platforms care about them, and that users have a say in what those platforms do.

They said "I don't want AI being trained on my content", expecting platforms to cater to and protect their values. Yet those platforms extract as much value from their users as possible, and that includes AI training.

Nostr data is about as accessible as connecting to the internet: there are no protectors to negotiate with. You're not negotiating with anyone over what you want to say, but you also can't tell anyone what not to do with what you say.
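
To make that concrete, here's a minimal sketch of pulling public notes straight off a relay: no account, no API key, no terms to accept. It assumes the Python `websockets` package, and `wss://relay.damus.io` is just one example of a public relay; the `["REQ", ...]` / `["EVENT", ...]` / `["EOSE", ...]` messages are the standard NIP-01 flow.

```python
# Minimal sketch: read public nostr notes from a relay (NIP-01).
# Assumes `pip install websockets`; the relay URL is one example of
# many public relays -- swap in any open relay you like.
import asyncio
import json

import websockets

RELAY = "wss://relay.damus.io"  # example public relay (assumption)

async def dump_recent_notes(limit: int = 10) -> None:
    async with websockets.connect(RELAY) as ws:
        # Subscribe: kind 1 is a plain text note in NIP-01.
        await ws.send(json.dumps(["REQ", "demo-sub", {"kinds": [1], "limit": limit}]))
        while True:
            msg = json.loads(await ws.recv())
            if msg[0] == "EVENT":   # ["EVENT", <sub_id>, <event>]
                event = msg[2]
                print(event["pubkey"][:8], event["content"][:80])
            elif msg[0] == "EOSE":  # relay finished sending stored events
                break
        await ws.send(json.dumps(["CLOSE", "demo-sub"]))

asyncio.run(dump_recent_notes())
```

No login, no negotiation: the same request works against any open relay, which is exactly why anyone (AI trainers included) can read whatever you publish.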

Yea I agree with that. I think she missed the plot.