They don't have any stake in it; I literally see people playing with report buttons for memes and can find those events on relays. Some even use the report button to flag a post (like a harsh comment), with no intention of censoring that person for their entire follower base. You are assuming that without any context.

Yeah, there may be some cases, but not all. You will find plenty of examples of abuse or misunderstanding around this feature in this comment section. I was just talking to a guy here a few hours ago.


Discussion

Reports are content flags. Clients can do whatever they want with them. I think it's better to address this on the sender's reporting screen. Otherwise, you will run into issues with every client.
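For context, a NIP-56 report is just a kind 1984 event, and the report type rides along as the third entry of the `p` or `e` tag. A minimal sketch in Python; the pubkeys, event id, and helper name here are illustrative placeholders, not real data or any client's actual code:

```python
# Sketch of an unsigned NIP-56 report event (kind 1984).
# All hex values below are placeholders, not real keys or ids.
def make_report(reporter_pubkey, reported_pubkey, report_type, event_id=None, reason=""):
    """Build the unsigned body of a kind 1984 report event.

    Per NIP-56, the report type (e.g. "spam", "illegal", "impersonation")
    is the third element of the "p" or "e" tag.
    """
    tags = [["p", reported_pubkey, report_type]]
    if event_id is not None:
        # Reporting a specific note: add an "e" tag pointing at it.
        tags.append(["e", event_id, report_type])
    return {
        "kind": 1984,
        "pubkey": reporter_pubkey,
        "tags": tags,
        "content": reason,  # optional free-text explanation
    }

report = make_report("deadbeef" * 8, "cafebabe" * 8, "spam", event_id="ab" * 32)
print(report["kind"], report["tags"][0][2])  # 1984 spam
```

Signing and publishing are omitted; the point is only that a report is an ordinary event any client can interpret however it likes.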

Example: here is tanel flagging one of my posts; it was a meme. He is one of my closest friends here. Do you really believe he wants me to be censored by his entire follower base? You are just assuming it.

For this feature to work, we would have to teach every single client user how to use reports and when. That's not even mentioning bias and intentional abuse.

Correct. We have to educate users. That's why it appears in the profile. You can reply to the report, and the reporter will receive a notification.

I know this is complicated, but you have a proven track record. I hope you will also find the best solution for this. Good luck, man! 🍀

Thanks for the sats! I do hope Tony's or somebody else's fork also brings some ideas to the table. I also hope somebody makes an app just to monitor reports, educate users, and call out reporting abusers. There is lots of work to do.

We will find a way 🤜🤛

What Iefan says is so true.

Report seems like it applies only to a particular note that you feel others may not want to see or may want to avoid, not the entire feed of a user.

Even on Amethyst, when I report a post, it doesn't explicitly mention whether I'm reporting the user or just the note.

It's always the note. Unless you are in the user's profile page and click the 3 dots in the corner of the page itself.
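That distinction is actually visible in the report event itself: per NIP-56, a note report carries an `e` tag pointing at the event, while a profile-only report has just the `p` tag. A small sketch (the tag lists are made-up examples):

```python
def report_target(tags):
    """Classify a kind 1984 report by its tags: "note" if it carries
    an "e" tag (targets a specific event), otherwise "profile"."""
    return "note" if any(t and t[0] == "e" for t in tags) else "profile"

# Example tag lists with placeholder hex ids.
note_report = [["p", "aa" * 32, "spam"], ["e", "bb" * 32, "spam"]]
profile_report = [["p", "aa" * 32, "impersonation"]]

print(report_target(note_report))     # note
print(report_target(profile_report))  # profile
```

So a client could easily show "this report is about one note" versus "this report is about the account", but it has to choose to surface that.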

But in that case, why would there be people who are shadow-blocked due to reports, as per this thread?

I doubt many would specifically go to the profile & click Report 🤷

Or is it the case that if a user has reports on their notes from 5 people you follow, all their notes get flagged?
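One way a client could end up behaving like that is a simple threshold over reports filed by accounts you follow. This is a hypothetical sketch of such logic, not Amethyst's actual implementation; the threshold of 5 just mirrors the number in the question above:

```python
def is_flagged(author_pubkey, reports, follows, threshold=5):
    """Hypothetical client-side rule: flag an author's notes once at least
    `threshold` distinct accounts you follow have filed any kind 1984
    report against them.

    `reports` is a list of (reporter_pubkey, reported_pubkey) pairs.
    """
    reporters = {reporter for reporter, reported in reports
                 if reported == author_pubkey and reporter in follows}
    return len(reporters) >= threshold

# Five followed accounts each report "author_x" once.
follows = {"f1", "f2", "f3", "f4", "f5"}
reports = [(f, "author_x") for f in follows]
print(is_flagged("author_x", reports, follows))  # True
```

Note that under a rule like this, per-note reports still accumulate against the author, which would explain the "shadow-blocked" effect being described.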

Everyone knows this. I'm a Damus user too. Most reports on Damus are for 'Bad Content' (i.e., a post), and it's easily verifiable.

But Vitor insists on using 'reports' as the main basis for censorship, not to mention how cheap they are.

It will just suffocate Amethyst users. Just take a quick scroll; it's already happening, and you'll see numerous examples right here. I was just talking to one. 🤞

At this point he is too stubborn to listen. Just fork off and siphon users away if you care that much, maybe then he will change his mind.

What do you mean by "Bad Content"? That's not even a NIP-56 instruction: https://github.com/nostr-protocol/nips/blob/master/56.md

Why not just fix the expectation on the Damus side?

“Some report tags only make sense for profile reports, such as impersonation”

“Some”

Hence nearly every report tag is referring to “bad content”, primarily per post.

Can you give an example event id? I cannot find anything with a "bad content" tag on Damus's relay. I only see spam, illegal, and explicit.
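For anyone who wants to verify this themselves, you can ask a relay for kind 1984 events about a pubkey and tally the report types. A sketch using a standard Nostr REQ filter; the websocket transport is omitted and the sample events are fabricated, so the tallying function works on any list of fetched events:

```python
import json
from collections import Counter

# A standard NIP-01 REQ filter for report events about one pubkey
# (the pubkey is a placeholder; you would send this over a relay websocket).
req = json.dumps(["REQ", "reports", {"kinds": [1984], "#p": ["aa" * 32]}])

def tally_report_types(events):
    """Count report types found in the 3rd entry of "e"/"p" tags
    across a list of kind 1984 events."""
    counts = Counter()
    for ev in events:
        for tag in ev.get("tags", []):
            if tag and tag[0] in ("e", "p") and len(tag) >= 3:
                counts[tag[2]] += 1
    return counts

sample = [
    {"kind": 1984, "tags": [["p", "aa" * 32, "spam"]]},
    {"kind": 1984, "tags": [["p", "aa" * 32, "illegal"], ["e", "bb" * 32, "illegal"]]},
]
print(tally_report_types(sample))
```

Running this over real relay results is how you would confirm that only types like "spam", "illegal", and "explicit" show up, with no "bad content" category.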

Those are subcategories of “bad content”

Well, they are not writing it in the event. So all everybody else sees is literally "Spam", "Explicit", and the others. I am not sure why.

Oh I see. Yeah I think the event id should be written if the user is reporting a note. That’s a “ui bug” I think

Sorry, not the event id. They are not writing that this is just "bad content". They are just writing full-on "Spam" and "Explicit" reports. They could create another category for "Bad Content". But they are not doing it.

What would a general “bad content” report offer over the more specific report tags?

I frankly don't know. But if they need something lighter than the other strong report tag categories (which is what people seem to be talking about here), they could create a new one.

It's not always bad content; sometimes they even flag dark memes as explicit, but that doesn't necessarily mean it's bad content.

Moreover, they often flag content for promoting shitcoins. nostr:npub1dyr5z60ddra8fsma8ynrt86pqp34cdlw5h87ecrya5pza5r00y4shzq6f3, for instance, has received approximately 58 reports simply for announcing the launch of a shitcoin.

Feels like personal content curation is in the domain of clients, i.e., not report events. The UI could offer users a way to do this, but it alone wouldn't stop report-event abuse.

Mute words would be good. Clever ML-based client-side filtering would be awesome. The UX really only needs to be easier than submitting a report event. That said, even though I don't like shitcoins and I don't want to see that crap, I guess some people will feel conviction in censoring stuff they think is a scam.
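A mute-word filter like the one mentioned here is cheap to do entirely client-side, with no event ever published. A minimal sketch; the word list is just an example:

```python
def is_muted(note_content, mute_words):
    """Hide a note locally if it contains any muted word (case-insensitive
    substring match). Nothing is published: this is purely client-side
    curation, unlike a kind 1984 report."""
    text = note_content.lower()
    return any(word.lower() in text for word in mute_words)

mutes = ["shitcoin", "airdrop"]
print(is_muted("Huge AIRDROP for our new token!", mutes))  # True
print(is_muted("gm nostr", mutes))                         # False
```

This illustrates the point above: muting only affects your own feed, while a report event broadcasts your judgment to everyone's client.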

Think about impersonation accounts. There is a classic Damus impersonator that most Damus users reported, and it therefore shows up with warnings on Amethyst. If you are being impersonated, you will find the report system very useful for letting your followers know that it is not you. And it's fully decentralized.

I agree. That’s why I’m differentiating between personal client-side filtering and report events

While I opted to just mute wolf, rather than report, others may go out of their way to report even if it takes 5x more UI interaction to do so, if they feel conviction in their assessment.

How can we reliably differentiate between real impersonation reports and censorship gaming?

The user can make that assessment. All we need to do is to let them know that there is a filed report against this person.

🫡👏