That’s a good question. I have two answers.
First, because I saw a lot of misinformation on Nostr. Misinformation isn’t the same for everyone, and that’s fine, diversity of opinion is incredibly valuable. But sometimes it’s not just about opinions; it’s about demonstrably false claims designed to deceive and manipulate.
Debunking false theories with evidence and sourced responses takes 20 times longer than spreading misinformation. An AI-powered tool was an imperfect but useful solution I wanted to explore.
I built this tool to give anyone an easy way to get more context about the information they encounter.
Second, I’m a computer science student, and I think Nostr and Bitcoin are among the most interesting protocols today. I wanted to experiment by building something on top of them.
I won’t stop working on it. For one, people are using it, so it’s useful (and occasionally entertaining). But also, I refuse, as a matter of principle, to give in to the harassment and violent censorship attempts from certain individuals.
Most of this note disgusts me. If you think it's ok to build tools to efficiently eviscerate people, then you're a sadist. You might not know it, but that's how it seems to me.
But sure. You do you.
How could the bot possibly eviscerate anyone?
It's just a tool.
If you want to use it, you can.
If you don't want to use it you can just ignore it.
If it really bothers you, you can mute it.
That's how Nostr works: you choose the content you want to see.
Fact-checking is not sadism; wanting to prevent it is.