Because

1. It counts as engaging with the post, which is not the intent. (I almost never give a negative reaction; if I don't like something, I just ignore it.)

2. There are clients that show the list of likes/reactions, and these posts pop up there and pollute that page if mute lists are not used.

And as you yourself mentioned, what counts as a positive or negative reaction is itself a flexible, contextual thing... e.g. I use the 😱 emoji mostly as faux shock and never negatively.

So I am not sure it's useful to add a reaction of any sort, especially if I can't undo it.

Would appreciate it if you could reconsider.

Discussion

It doesn't count as endorsing. Reactions can be positive or negative. There are plenty of reactions (👎😡🤬😰😓👇⬇️) being used daily to show discontent with posts, and people definitely don't want to see those in their profiles. Any client that treats everything you react to as an endorsement needs to be fixed to filter the negative stuff out.

But that's the point, no? In the maybe 1-2 contexts where I used the 🤬 emoji, it was sarcastic and not a negative reaction at all.

People use emojis differently, and it varies by individual... It's far more difficult for clients to judge and selectively filter, unless they let users customize which emojis to filter out (and this doesn't even count custom emojis -- how would you judge those?)

And Amethyst can at least be transparent that an ⚠️ reaction is also being sent... Then people can decide if they want that.

I had stopped reporting stuff because I don't want to react whatsoever to CSAM or illegal notes, and I don't want to take the risk of being seen as engaging with those.

When you report, or you mute those notes or authors, you are engaging with them: you are creating report events or adding their keys or IDs to mute lists or block lists. It's the same thing as a reaction. The same thing happens on clients that mark events as read, etc. Those are all interactions.
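
To make that concrete, here is a minimal sketch (TypeScript, with ids, signatures, and timestamps omitted, and all hex values as hypothetical placeholders) of those three interactions as Nostr event templates, per NIP-56 (reports), NIP-51 (mute lists), and NIP-25 (reactions):

```typescript
// Unsigned Nostr event templates for the three interactions above.
interface EventTemplate {
  kind: number;
  content: string;
  tags: string[][];
}

const reportedNoteId = "<hex event id>"; // hypothetical placeholder
const reportedAuthor = "<hex pubkey>";   // hypothetical placeholder

// NIP-56 report: kind 1984, with the report type riding on the "e" tag
const report: EventTemplate = {
  kind: 1984,
  content: "",
  tags: [["e", reportedNoteId, "illegal"], ["p", reportedAuthor]],
};

// NIP-51 mute list: kind 10000, muted keys listed as "p" tags
const muteList: EventTemplate = {
  kind: 10000,
  content: "",
  tags: [["p", reportedAuthor]],
};

// NIP-25 reaction: kind 7, content is "+", "-", or an emoji
const reaction: EventTemplate = {
  kind: 7,
  content: "⚠️",
  tags: [["e", reportedNoteId], ["p", reportedAuthor]],
};

// All three embed the reported id or author key, so all three tie the
// signing user to the content in exactly the same way.
```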

There is no way to only interact with positive content on Nostr. It's the wrong way to look at it. You are always interacting. That interaction can be positive or negative. But they will always be there... As they should.

Clients don't need to parse reactions into positive or negative if they don't want to. The only thing they absolutely CANNOT assume is that all reactions should be interpreted as the author supporting them.
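
As a sketch of what such parsing could look like if a client chose to do it: the "+" and "-" meanings below come from NIP-25, while everything else is a client-side judgment call, which is exactly the problem being discussed:

```typescript
// One possible client-side reading of NIP-25 reaction content.
// NIP-25 defines "+" (and empty content) as a like and "-" as a
// dislike; any other content is an emoji or a custom-emoji shortcode
// whose meaning is contextual and varies by user.
type Sentiment = "positive" | "negative" | "unknown";

function classifyReaction(content: string): Sentiment {
  if (content === "+" || content === "") return "positive";
  if (content === "-") return "negative";
  // 😱, 🤬, ⚠️, :custom_emoji:, ... are contextual: a careful UI
  // treats them as neither endorsement nor attack.
  return "unknown";
}
```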

We will make it more transparent that it's being sent. Right now only people using Amber can see the reaction event being signed together with the report and the mute list change.

It's absolutely not the same as a reaction. Reactions are contextual and can be used differently by different people. I am not saying it's endorsing the content, but it's definitely engaging.

A report is clearly a report, and a mute/block is clearly a mute/block - there's no room for misinterpretation! And moreover, I can remove a block/mute if I so wish.

How can I remove a reaction that was sent on my behalf without my knowledge?

And thank you for considering making it more transparent 👍 That helps!

Reports are also contextual. There are plenty of people sending Nudity reports for things other people don't think are nudity at all.

Reports are absolutely the same thing as a bad reaction. They just use different event kinds. But if you use them without interpreting what the user was trying to say, you are going to have a bad time in both cases.

You can just send a delete for a reaction. I have not tested it recently, but if you see the post again (might need to unblock the user) you should see your warning label there. You can click on it to delete it just like you do with any other reaction on Amethyst.
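
For reference, the deletion is itself just another event; a minimal sketch per NIP-09, with the reaction id as a hypothetical placeholder (relays are not obligated to honor deletion requests):

```typescript
// NIP-09 deletion request: kind 5, with "e" tags naming the events to
// delete and an optional "k" tag hinting at the kind being deleted.
const reactionIdToDelete = "<hex id of the earlier kind-7 reaction>"; // hypothetical

const deletionRequest = {
  kind: 5,
  content: "removing a reaction sent on my behalf",
  tags: [["e", reactionIdToDelete], ["k", "7"]],
};
```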

The main point here is privacy - what he is saying is correct in that some of us do not want to build a public data graph.

Say I see "Nudity" content - I may want to report it so that those who don't want "Nudity" content can interpret that and hide the post. But I definitely don't want it to be public... people will be like - look at Rockstar spending his day browsing nudes on Nostr (would be a cool meme though).

The solution here is to delegate reports... e.g. I wouldn't mind sending my Nudity reports to people I trust (like you Vitor / Amethyst), but I don't want to be involved further in acting on those reports. These should be aggregated and then used depending on people's preferences.
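
As a rough sketch of that delegation idea (nothing here is an existing feature; `encryptTo` is a hypothetical stand-in for NIP-04/NIP-44 encryption, and the keys and ids are placeholders): instead of publishing a public kind-1984 report, the client could send the same payload encrypted to a trusted aggregator, who then reports under its own key:

```typescript
// Hypothetical delegated report: the payload travels as an encrypted
// DM to a trusted aggregator instead of as a public kind-1984 event.
declare function encryptTo(recipientPubkey: string, plaintext: string): string;

const aggregatorPubkey = "<hex pubkey of trusted aggregator>"; // hypothetical
const reportedNoteId = "<hex event id>";                       // hypothetical

const delegatedReport = {
  kind: 4, // NIP-04 encrypted direct message carrying the report payload
  content: encryptTo(
    aggregatorPubkey,
    JSON.stringify({ target: reportedNoteId, type: "nudity" })
  ),
  tags: [["p", aggregatorPubkey]],
};
// The aggregator can then publish its own public kind-1984 reports,
// keeping individual reporters out of the public graph.
```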

Also nostr:npub1gcxzte5zlkncx26j68ez60fzkvtkm9e0vrwdcvsjakxf9mu9qewqlfnj5z - one more thing that Uncle, as a gentleman of the world, wants to bring up here: reporting behavior is very culture / character dependent. It is very different in Asia (where I assume nostr:npub1qn49n06hdwwyrtvdyymu2wx57jvhz7anmu20tgsdjjyae3zhwaxsjtl6rj is from) vs Europe vs the US, etc...

Agree, any action over reports or reactions must account for those cultural differences. They are like relay hints. Relays and other clients can use them in any way they see fit as long as they know this variance is there.

Can you explain the difference? I'm curious about the differences between cultures and would appreciate your insight.

Showing a woman's skin is ok in the west, but not ok in the Arab world.

I'm getting tired of women going around half naked. The only ones I like to look at are the ones wearing an ankle-length dress, with waist-length hair.

Oh I understand that. I was referring to the difference in attitudes towards reporting.

Well, in one case, people will report that as Nudity and relays of that culture will accept the report and delete the event. Same for followers in the same culture.

While in the west, the post will stay.

So, processing reports and reactions is always culture dependent. As long as everybody acknowledges that, it should be fine.

Reporting is free speech. Reporting a reporter for misreporting something is also free speech. And people can make decisions based on those records.

Thanks for the explanation!

It's more than that - in some cultures people will openly bring up problems, while in others it is considered rude / egotistical.

I think discussing it from perspective of privacy consciousness rather than "how much female skin is showing" would make the discussion easier to follow and understand.

Makes sense. I just don't see a private framework for reports, because if you can take a post down while hiding behind privacy, then everybody will just take everything down.

To me there are two options: users can either participate in the reporting system or not. If they do, they MUST be accountable for their reports. If they are reporting on a whim, their society should be aware of what they are doing and react accordingly.

The participation of users is impacted by how public the system is. And there is obviously a range between forcing 100% public and completely private. By finding a balance, you would improve the system in absolute terms.

I understood at a certain point of the conversation that for you it is a design decision... so we'll likely finish this conversation with "agree to disagree". But I think we're all better off and have more info on how to go forward... and I would like to contribute some code to this in the near future. Also really glad nostr:nprofile1qqsf03c2gsmx5ef4c9zmxvlew04gdh7u94afnknp33qvv3c94kvwxgspr3mhxue69uhksmmyd33x7epwvdhhyctrd3jjuar0dak8xtcpz4mhxue69uhhyetvv9ujuerpd46hxtnfduhszxnhwden5te0wpuhyctdd9jzuenfv96x5ctx9e3k7mf0ss9zgs joined; learned some cool new things.

Awesome! Keep pushing. I love to see an alternative.

How does the bot associate the WoT of the sender with the report it files?

A report from you should be more valid to your followers than a report from a spammer that has no followers.

If that information is not in the report, I am not sure what the point of the system is, because anyone could just report anything they don't like and hide behind the bot.

Meaning, people get to report and take posts down without being accountable to the public.
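
For illustration, the web-of-trust weighting being asked about might look like the sketch below, assuming the follow sets have already been built from kind-3 contact lists; the weights are arbitrary illustrations, not a spec:

```typescript
// Sketch: weight a report by the viewer's web of trust. Follow sets
// are assumed to be precomputed from kind-3 contact lists.
function reportWeight(
  reporterPubkey: string,
  viewerFollows: Set<string>,
  followsOfFollows: Set<string>
): number {
  if (viewerFollows.has(reporterPubkey)) return 1.0;    // direct follow
  if (followsOfFollows.has(reporterPubkey)) return 0.5; // friend of a friend
  return 0.0; // unknown key: a spammer with no followers carries no weight
}
```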

It doesn't; that's a trade-off. I've considered allowing users to pick public/private reports, because public reports of spam would be far more useful/less dangerous than public reports of e.g. CSAM or harassment.

Private by default with the option to make it public seems like a great idea. I think it's all about providing a visibility setting. The in-between - where you report to a trusted delegated party - is what would be the default for me.

On the privacy part, Blocks/Mutes are private on Amethyst. Reports are always public.

Both have the same effect for you as a user, but are intended to have different effects for others.
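
To make that split concrete: NIP-51 lets a mute list keep its entries out of public view by encrypting them into the event's content field (to the user's own key), while a NIP-56 report must carry the reported id in plaintext tags to be useful. A sketch, with `encryptToSelf` as a hypothetical stand-in for the NIP-04/NIP-44 encryption step:

```typescript
// NIP-51 private mute list: the muted "p" tags are serialized as JSON
// and encrypted into the content field to the user's own key, so the
// published event reveals nothing publicly.
declare function encryptToSelf(plaintext: string): string;

const privateMuteList = {
  kind: 10000,
  content: encryptToSelf(JSON.stringify([["p", "<muted hex pubkey>"]])),
  tags: [], // nothing visible in the public part of the event
};
```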

I think this is the main point... you've already implemented private / public reporting in the way you see as most relevant.

And the way I understand your position is: reporting has to be 100% public, otherwise you don't consider it a valid report. So if someone wants to report "nudity", you would insist they do it publicly... there is no support for, say, reporting it privately or through an intermediary.

In any case - this has been a great conversation, and thanks to nostr:nprofile1qqsf03c2gsmx5ef4c9zmxvlew04gdh7u94afnknp33qvv3c94kvwxgspr3mhxue69uhksmmyd33x7epwvdhhyctrd3jjuar0dak8xtcpz4mhxue69uhhyetvv9ujuerpd46hxtnfduhszxnhwden5te0wpuhyctdd9jzuenfv96x5ctx9e3k7mf0ss9zgs for his input as well. I'll keep looking into how to properly categorize notes and then also give that technology to others; I think it's very important for scaling Nostr in the future.

Reactions should not be filtered, that's not their purpose.

They don't need to. The UI simply CANNOT ASSUME they are all positive. That will NEVER make any sense.

They're not binary, so yes, the client cannot make assumptions about how to read them, nor automatically imply an arbitrary meaning and auto-send that meaning as if it were what the user wanted when it clearly wasn't. Reports should be performed privately between the user and the client, and publicly only by the client, so users are not associated with unwanted content directly.

Reports ARE associating the content with the user of the app. There is no privacy on it. Users are creating an event and putting the reported id on it.

Reports ARE NEVER PRIVATE. The name "report" should already tell you that.

Mutes are private on Amethyst, but other clients make those public as well.

They are all associating the user with the bad content. There is no way to do any of this without that association.

I know, I don't agree with them either... but at least that is intentional and purposeful, reactions aren't.

Intentionality cannot be assumed either.

Lots of people have been replying to, reposting, and even zapping spammers and impersonators lately. This notion that "users should avoid associating themselves with bad content" doesn't work because the association is inevitable.

It's better if we normalize that association and then filter it down by what it actually meant at the time.

On top of that, clients do a lot in the background. We are generally signing tons of events left and right just to operate. Yeah, we can ask for permission every single time, but that will get annoying really quickly. We already see this with Amber: people check the first few times and then just approve everything the app needs to do to operate.