Could be an AI glitch


Discussion

It most likely is, but that's not an excuse for them. They have some algorithm checking the attached images, and it has a false positive rate. Presumably the algorithm is tuned to be risk averse, so for every 10k images some small percentage gets flagged incorrectly. It also has a false negative rate, presumably smaller than the false positive rate, so they are still missing some violations and people will flag them. As a matter of regular operation, then, they are randomly annoying their users and creating spin-off problems, because people will be able to find explicit material that wasn't flagged and compare it to material that was flagged inappropriately. And that's before anyone tries to use flagging adversarially to censor opinions they don't like.

Moderating to a standard advertisers will accept while maintaining "free speech" is pretty much impossible, and it's why Twitter never really was, nor should have been considered, a "public commons". Elon has to own the quality of the final product. He claimed free speech absolutism, and he is falling short of that claim. I'm not saying I could do any better, but he claimed the Twitter moderation problems were trivial to solve, so I don't mind enjoying the show.

Self-curation and tools that empower users to control what they see are the only path, and free speech likely doesn't have a market. Nostr fixes this.
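To make the false-positive point concrete, here is a rough back-of-envelope sketch. The rates and prevalence below are made-up illustrative numbers, not anything published by Twitter; the only point is that when violating content is rare and the classifier is tuned aggressively, wrongly flagged users can outnumber missed violations by a wide margin, while a few violations still slip through for people to point at.

```python
def expected_moderation_errors(total_images: int, prevalence: float,
                               fpr: float, fnr: float) -> tuple[float, float]:
    """Expected errors for one batch of scanned images.

    total_images: number of images scanned
    prevalence:   fraction that actually violate policy
    fpr:          false positive rate (clean images wrongly flagged)
    fnr:          false negative rate (violating images missed)
    """
    violating = total_images * prevalence
    clean = total_images - violating
    false_positives = clean * fpr       # innocent users who get flagged
    false_negatives = violating * fnr   # bad content that slips through
    return false_positives, false_negatives

# Assumed numbers: 10k images, 1% actually violating, 2% FPR, 1% FNR
# (risk-averse tuning: FNR kept smaller than FPR, as the post assumes).
fp, fn = expected_moderation_errors(10_000, 0.01, 0.02, 0.01)
print(f"~{fp:.0f} clean images wrongly flagged, ~{fn:.0f} violations missed")
# -> ~198 clean images wrongly flagged, ~1 violations missed
```

Under these assumptions roughly two hundred users get annoyed per ten thousand images while about one violating image still gets through, which is exactly the "compare the flagged post to the unflagged one" dynamic described above.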