If it's successful, Nostr will be the most filtered social media platform ever, and that is a good thing. People seem to think the goal is removing all rules from social media, but as anyone who has ever tried it, like Elon and the Darknet Markets, has discovered, a game without rules is a shitty game. In fact, the solution may lie in the opposite direction: an explosion of rules. But instead of rules specified in a centralized place, the rules are best set at the edges, by the users themselves. Exactly how Bitcoin's IsStandard mempool acceptance policy works.
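To make the analogy concrete, here's a minimal sketch of what such edge-side rules could look like. The event fields follow NIP-01; every threshold and list in it is an illustrative personal setting of mine, not part of any spec:

```python
# A minimal sketch of edge-side acceptance rules, analogous to Bitcoin's
# IsStandard mempool policy: an event can be perfectly valid per the
# protocol and still be rejected by *my* local policy. Field names follow
# NIP-01; every threshold and list here is an illustrative personal choice.

MAX_CONTENT_LEN = 2000          # personal limit, not a protocol rule
MUTED_PUBKEYS = {"deadbeef"}    # hypothetical local mute list

def is_standard(event: dict) -> bool:
    """Accept or drop an event purely by local policy."""
    if event["pubkey"] in MUTED_PUBKEYS:
        return False
    if len(event["content"]) > MAX_CONTENT_LEN:
        return False
    if event["content"].count("http") > 5:  # crude link-spam heuristic
        return False
    return True

incoming = [
    {"pubkey": "abc123", "content": "gm nostr"},
    {"pubkey": "deadbeef", "content": "buy my token " + "http://x " * 9},
]
feed = [e for e in incoming if is_standard(e)]
print(len(feed))  # -> 1; the spammy event never reaches my feed
```

The point is that nothing global decides what is "standard": each user (or each relay they choose) runs their own version of this function with their own variables.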
We are quick to take a dump on Jordan Peterson for speaking up against anons, and although I am not happy with his solutions, I find the points he was making about the problem rather trivial. In fact #[2] claimed the same in a talk titled "Anonymity Is The Problem", which highlights that moderation of content/requests/API calls/etc. is the main problem us developers of anonymity systems are contending with. How ironic is that? We're building anonymity, and because of that, most of our protocol-design time is spent on thought experiments about how anons can game these systems. Anons are our adversaries :) Anonymity is a weapon everyone should have access to, but so too should the defense against it. On a local, personal level, not in the form of a global governing body. Enter Nostr.
Since the nostr protocol is open, there'll be a lot of innovative ways to filter out the inevitable armies of bots and spam. Funnily enough, open source development tends to delegate difficult decisions to the end user. This is normally an antipattern, but with nostr the incentives align quite interestingly: users will be able to tweak more and more variables of what kind of messages they do and don't want to see. And guess what, you're the average of the people you interact with, so I foresee users coming up with strict rules for filtering content for themselves, rules that go way beyond the current standard of filtering out "misinformation", in a way that finally makes social media worth spending time on (imagine that :) because the conversations there will make them better people, instead of stupider, angrier and more anxious.
Here's an example: ChatGPT can already do a decent job of assigning an "Intellectual Honesty Score" to tweets, which means it's possible even today to separate the wheat from the chaff. And I would certainly make it so that only the highest quality content gets to mess with my consciousness, because currently it's pretty difficult to justify the time spent on social media.
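For illustration, a rough sketch of how such scoring could be wired up. It uses the OpenAI Python client; the model name, prompt, and cutoff are assumptions of mine, not a tested or calibrated pipeline:

```python
# Sketch: asking an LLM to score a post for intellectual honesty.
# Uses the official OpenAI Python client; the model name, prompt and
# threshold below are illustrative assumptions, not a tested pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def honesty_score(text: str) -> int:
    """Return a 0-100 'Intellectual Honesty Score' as judged by the model."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Rate the intellectual honesty of the following "
                        "post from 0 (dishonest) to 100 (honest). "
                        "Reply with the number only."},
            {"role": "user", "content": text},
        ],
    )
    return int(resp.choices[0].message.content.strip())

# Only let high-scoring content through to my feed:
if honesty_score("Some note text here") >= 80:
    print("show it")
```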
What do spam and intellectual dishonesty have in common? I want neither of them creeping onto my feed, nor into my life.
Interesting thoughts. Now add your LN address so we can zap you for them!
Interestingly enough my talk was actually called "Identity is the problem", so I was largely making an opposing point (or at the least, a different one) ... I think "solutions" to Sybil attacks based on identity are largely bound for failure.
I guess filtering in online discussions is another discussion (with overlap, ofc). For that I'm mostly in agreement with what you say here. I just don't understand why, for the whole microblogging/social media thing, we don't have user-controlled algorithms for feeds. Fediverse/ActivityPub is in the same boat as nostr there; I see no reason why it couldn't happen, other than it being fairly sophisticated engineering. I guess users mostly can't handle that cognitive load.
Personally I don't like blocking, I don't like tribalism, and I'm a free speech extremist; at the very least, people should understand that free speech is crucial as a principle to be upheld. So I'm mostly interested in how to block *spam*, and certainly not things like "intellectual dishonesty" ... whatever that is.
Thanks for pointing out the mistake.
On "Intellectual Dishonesty Score (IDS)" Of course you know what that is, you can often recognize it when you see it, it's just "Programmer You" has problems with imagining how the hell a computer can make such judgement. I intentionally brought this example to highlight AI innovations that even I thought previously was impossible. Anyhow, if it's easier to you, then you can just think of a robot recognizing logical fallacies. A "Logical Fallacy Score" is more algorithmizable than IDS.
Blocking and muting are similar in the sense that they're powers users possess; however, these actions target specific users and not the content itself.
You brought up a very good question: why don't we have user-controlled algorithms for feeds?
You theorized it might be:
(1) the sophistication of the engineering that's the bottleneck, and I alluded to that as well. If that's the case, I think #wokegpt et al. bring the long-awaited breakthrough. But on second thought, let me add two more theories here:
(2) It could also be that platforms have a strong incentive to control the game and optimize the user feed for "engagement."
(3) It could also be that it has simply never been tried; in context, social media is still in its infancy, so even such obvious ideas might never have been attempted before.