What say you, #Nostr? Do you like the idea of your reports in #Amethyst very likely showing up as "likes" in other clients?
I have a question for #amethyst users. I'm not asking this to dish on nostr:nprofile1qyghwumn8ghj7mn0wd68ytnhd9hx2tcppemhxue69uhkummn9ekx7mp0qythwumn8ghj7anfw3hhytnwdaehgu339e3k7mf0qy2hwumn8ghj7un9d3shjtnyv9kh2uewd9hj7qpqgcxzte5zlkncx26j68ez60fzkvtkm9e0vrwdcvsjakxf9mu9qewqss2dqr , but only because his users are my users, and I care about my users' (for lack of a better word) "safety" on nostr.
Currently, when you report something, Amethyst does two things:
- Publishes a kind 1984 report event
- Reacts on your behalf with a ⚠️ kind 7 reaction
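For readers who haven't looked at the raw events, here's a rough sketch of what those two events look like on the wire, per NIP-56 and NIP-25. This is plain TypeScript for illustration, not Amethyst's actual code; the ids, pubkeys, and "spam" report type are placeholders.

```typescript
// Hypothetical illustration only, not Amethyst source. Ids and pubkeys are placeholders.
interface UnsignedEvent {
  kind: number;
  created_at: number;
  tags: string[][];
  content: string;
}

const reportedEventId = "<id of the note being reported>";
const reportedAuthor = "<pubkey of its author>";
const now = Math.floor(Date.now() / 1000);

// 1. The NIP-56 report (kind 1984), with a report type on the e tag
const report: UnsignedEvent = {
  kind: 1984,
  created_at: now,
  tags: [
    ["e", reportedEventId, "spam"],
    ["p", reportedAuthor],
  ],
  content: "", // optional free-form details
};

// 2. The NIP-25 reaction (kind 7) published alongside it
const reaction: UnsignedEvent = {
  kind: 7,
  created_at: now,
  tags: [
    ["e", reportedEventId],
    ["p", reportedAuthor],
  ],
  content: "⚠️", // this is what other clients may render as a "like"
};
```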
TL;DR: do you find the emoji reaction to be a problem? Full background below.
I've always been skeptical of public reports, because regardless of intent, they publicly and permanently associate your public key with objectionable content. This may be as harmless as reporting spam, which is fine to do publicly, or as sensitive as reporting directed abuse (which can reveal additional information about your associations), or reporting CSAM (which is a legal gray area in some jurisdictions, since it may constitute "advertising" the content).
I personally use nostr:nprofile1qyfhwumn8ghj7ur4wfcxcetsv9njuetn9uqsuamnwvaz7tmwdaejumr0dshsz8nhwden5te0dak8jmtsd93hxv3sxg6zumn0wvh8xmmrd9skctcpz4mhxue69uhhyetvv9ujuerpd46hxtnfduhsz9nhwden5te0v4jx2m3wdehhxarj9ekxzmny9uqzqrezcph2cyqzdp80e35026z5p6p595tqn4gghn2rztqr3esef79kpu7u7y 's nostr:nprofile1qyvhwumn8ghj7un9d3shjtnndehhyapwwdhkx6tpdshszymhwden5te0wp6hyurvv4cxzeewv4ej7qg4waehxw309aex2mrp0yhxgctdw4eju6t09uq3qamnwvaz7tm99ehx7uewd3hkctcpzamhxue69uhhyetvv9ujumn0wvh8xmmrd9skctcqyptdfv7kxy86mdeffdlsgx4tg6w9llyfjxcmrve3nqdedgjx76hx2a33ch8 to anonymously and privately process reports in Coracle, because I want to protect my users as much as possible. But I'll admit that use of kind 1984 is nuanced and open to debate.
Much worse than using kind 1984 though, which semantically fits the concept of "reporting", is using reactions to signal reports. First of all, this doesn't really add any new information that kind 1984 doesn't already contain. It also has the effect of generating content on behalf of a user that they may not know they're consenting to.
In many clients (formerly including Coracle), "likes" are not filtered by emoji, and so these kind 7 "reports" end up displayed as "likes". Completely fixing this problem is impossible, because it requires mapping a high-fidelity subjective medium (emojis) to a low-fidelity objective medium (an up/down vote) in order to show likes. This can only be done with a reasonable degree of reliability for a very few emojis. This creates a problem for like-based clients in that lots of reactions can't be included in like tallies, resulting in lower social signal.
At any rate, I implemented the partial fix of whitelisting "obviously positive" emojis when calculating "likes" a long time ago, because reactions can be negative. However, I didn't apply this to the "likes" tab on user profile pages, which was brought to my attention earlier this year when an Amethyst user asked me why a bunch of CSAM was showing up under his "likes". He wasn't aware that "reporting" in Amethyst created a public record of his consumption (unintentional or otherwise) of illegal porn.
This has since been fixed in Coracle, but likely still occurs in other clients that haven't yet addressed it, in "trending" algorithms, and in Coracle custom feeds built by retrieving kind 7 events (since kind 7 sentiment can't be filtered against on the relay side).
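To make the relay-side limitation concrete, here's a hedged sketch; the whitelist and types below are illustrative, not Coracle's actual code. A NIP-01 filter can constrain ids, authors, kinds, and tags, but not an event's content, so sentiment has to be sorted out client-side after the reactions arrive.

```typescript
// Hypothetical sketch; the whitelist is illustrative, not Coracle's actual list.
interface NostrEvent {
  kind: number;
  pubkey: string;
  content: string;
  tags: string[][];
}

// A NIP-01 filter can constrain ids, authors, kinds, and tags,
// but there is no way to say "kind 7 whose content is not ⚠️".
const reactionFilter = { kinds: [7], "#e": ["<note id>"] };

// So sentiment has to be resolved client-side, after fetching everything.
const OBVIOUSLY_POSITIVE = new Set(["+", "❤️", "🤙", "👍", "🔥"]);

function countLikes(reactions: NostrEvent[]): number {
  return reactions.filter(
    (e) => e.kind === 7 && OBVIOUSLY_POSITIVE.has(e.content.trim())
  ).length;
}
```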
This is a Really Bad Thing, because it results in clients advertising content as connected with the person who had intended to dissociate themselves from it. While clients that process reactions can mitigate this, the root issue is that a field for user-generated content is being overloaded for use in an application-specific context.
So, that's my opinion. What do you think? Do you find it surprising that reports in Amethyst may be treated as "likes" in other clients? Is it Amethyst's fault for creating the reactions, or other clients' fault for not filtering them out?
For more discussion, see the thread on github: https://github.com/nostrability/nostrability/issues/88
Discussion
That's the wrong phrasing. Reactions are not likes. 🤪 is not a like. Clients should not call them likes. That's the other client's problem.
While I agree other clients should be more careful with terms and labels, it's high time #Amethyst drop kind 7 reports. I, for one, don't want my npub associated with any sort of emojis on content I choose to report.
What is the purpose of using a kind 7 reaction paired with the kind 1984 report at all? Why not just do the kind 1984 and leave NO reaction from the npub on what they believe is objectionable content at all?
To help clients that don't yet implement reports, to help content indexers figure out how good a post is, and to help clients that do not want to implement reports because they see them as too harsh and/or a form of censorship. The reaction tagging is a softer way of doing it.
I don't find these reasons particularly compelling.
"To help clients that don't yet implement reports..."
In what way would this help those clients? They should probably just implement reports rather than just having a warning sign emoji show up as a reaction to a post. And that is assuming they have that array of reaction types available. As mentioned by nostr:npub1jlrs53pkdfjnts29kveljul2sm0actt6n8dxrrzqcersttvcuv3qdjynqn, if they haven't implemented reports, they may also only have one reaction type visible in the client, and users of #Amethyst just have to hope they don't display all kind 7 reactions as a positive reaction on their client.
"...to help content indexers figure out how good a post is..."
While negative reactions, when they are unambiguously negative, can help content indexing, so could reports, and with less ambiguity. I can think of situations when I might react with what could be considered a negative reaction, but not because I don't value the post at all, or wish it hadn't shown up in my feed. For instance, someone might post about how there are parents intentionally mutilating their children because they "feel" like they are a different gender, and the OP is clearly against this. I would probably react with a puke emoji 🤮 IN AGREEMENT with the OP, even though that is a clearly negative reaction. Likewise, if someone posts something alarming about some garbage that is being put into processed food that I had no idea was the case, I might react with ⚠️. That should not reflect on how "good" the post is, as it might be a very good post about the dangers of processed garbage in our food. However, reports should definitely have bearing on how good a post is, and lack of reactions can speak as loudly as reactions.
"...to help clients that do not want to implement reports because they see as too harsh and/or form of censorship."
I am unsure how this helps such clients either. If they don't want to implement reports, then reports just won't show up in their client. Reactions may show up, if they have the ability to display a wide variety of reactions, but anyone using the client won't necessarily know that a ⚠️ reaction means the note was reported on another client, unless the client displays that specific type of reaction as a report, in which case they should just implement kind 1984.
It is my understanding that how "soft" or "hard" a report function is can be determined by each client, right? Each client can choose what a report results in for their client. Maybe one client will blur any images on notes that have been reported by people you follow. Another will do so with anyone in your entire web-of-trust. A harsher client may choose to hide all posts that have been reported by your web-of-trust by default, and only show them if you turn this feature off in your settings. But ultimately, each client decides how a report affects their users' feeds, so it can be as "soft" as any client prefers it to be, including doing nothing with the information at all, or simply displaying something saying "5 people you follow have reported this note," but showing the note to you anyway.
Ok, if that is not enough, let me give you more (yes, there is more).
If you are in a client that doesn't support reports and you make a post people don't like and they start reporting it on Amethyst, your post will disappear from everywhere and you won't even know.
If we send the warning reaction, you will know that some people are reporting you, and that your post, and in some cases your whole profile, is now tainted.
I always side with letting people know, even when their client doesn't want them to know.
But you're making that reaction without letting the person performing the report know...
That's an interesting point, and I can definitely see value in being alerted that my posts are being reported, even when using clients that don't support reports, but if my client doesn't support reports, it probably also doesn't let me know that a ⚠️ reaction means my note was reported.
I also think the bigger issue is automatically creating reaction events when the user isn't aware anything more than a report is being created.
Having diverse clients is good because it supports user choice. Implementing certain features in such a way that it hijacks other clients' feature set to support your opinions about how they should be built seems antithetical to user choice and an open protocol.
So, you forcing me how to operate is fine. But when I do it I am hijacking stuff? Come on man...
Plus, I am not hijacking anything. Your client still works because you follow the spec. Other clients don't work because they decided not to follow it. Which is their decision. I am not here claiming that they are hijacking anything. It's their choice, and until now I never asked them to fix it, even though it has been breaking Amethyst for over a year. In the end it's their choice. They are free to do it in any way they want.
How am I forcing anything? I'm asking you to make a change I think serves your (and my) users.
But your client is not affected. How is this affecting Coracle users?
Because my users also use Amethyst, and the many clients and algorithms that consume reactions. Because when people try nostr and see these sorts of interoperability problems, regardless of whose "fault" it is, they have a bad experience.
I've only argued this for two days because you seem unwilling to understand my point. I thought this would be a simpler conversation, because you're a reasonable person. You don't have to do anything you don't want to, but the feature I'm advocating you remove seems to me to be both bad and unpopular. Feel free to disagree, but we should be trying to row together here, not break the user experience in clients we don't agree with.
Can you please give me an example of how Coracle users are affected? I need something that is real, that is breaking right now, and that is not because the other client didn't implement the spec or just ignored what it says.
What I don't like in this whole debate is people advocating for other folks. Amethyst has been doing this for 1.8 years at the scale of 200,000+ installs and NONE, literally NONE, of our users ever mentioned this as a problem, even though most of the reports in Nostr come from us. I am not kidding. This is a really bizarre discussion. Even after you and others tried to gather Amethyst users to be angry about this, only a few people engaged. So, forgive me if I don't want to make changes based on hypotheticals. I can't just base my decisions on 5 of 200,000+ users. Otherwise, I would just go crazy with the amount of stuff that I would need to do.
Maybe people didn't mention it as a problem because they didn't know it was happening. It took a person seeing CSAM under their likes for me to hear about it. Most people who responded to my note are surprised about it.
I'm not trying to turn the mob against you, I just care about users not seeing child porn. I even contacted a lawyer from EFF about this late last year, because it's extremely concerning to me, for nostr in general. This is why I use Tagr. I've only spilled so many words on it because you didn't seem to understand my point, not because it's some moral crusade for me. I'm disappointed it had to turn into a flame war. That was not my intention. Internet 1, hodlbod 0 I guess
Did they see the CSAM in the likes in Coracle or in another client?
See... I take the fact that they didn't know as a sign that it's working well. These reactions are out there. And reports are the things that generate the most heated debate between users who are reporting each other. Reports generally escalate rather quickly. So, if this was a real problem, this thing would not only be known, it would be all over Nostr. It would be our anti-spam filters debacle all over again.
If my actions are leading to other clients breaking and it's my fault, I am happy to fix it. But otherwise, there should be space to play around with different implementations for things and this is one of them. This type of difference is what makes different clients different.
I think what we do is positive for the network. Other clients might disagree and not send the reaction. Receiving clients can filter it out or use it in any way they see fit. And users can pick the client with the approach they like (or that they don't bother about). It's all good.
"Amethyst has been doing this for 1.8 years at the scale of 200,000+ installs and NONE, literally NONE, of our users ever mentioned this as a problem"
Sure looks like someone was raising the alarm 5 months ago. I just searched "kind 7" and found this:
I use the word hijacking because of what you said. "I always side with letting people know, even when their client doesn't want them to know." implies you're bypassing the design decisions of other clients on purpose.
You can’t help but bypass the arbitrary decisions of client developers when their decisions conflict with yours. Otherwise an aristocracy is created.
Yes, in that case I am making the decision for the other client, and I think that is the correct decision. As one of the few clients that allows people to report and uses that report information actively in our user base, the least I can do is to let users that are being reported know that it is happening. Even if their client doesn't want them to know.
By signing a kind 7 that many of your users are unaware is being signed on their behalf?
We also add the note author to their multiple mute lists on the same report button. We also update outdated relays on the fly, and we authenticate to relays without asking the user. The app does A LOT of things that we don't yet have individual little interfaces and explainers for. This combination of actions is what distinguishes Amethyst's user experience from others. It has always been like that. And the code is open. It's not like we are trying to hide what each button does. We package everything into a few simple actions that do a lot, so the interface is not polluted with all the nitty gritty of Nostr.
Also, Amber users get 3 approvals every time they report. So, people can always reject what the app is trying to do if they want to, without us having to code anything.
No one is arguing that #Amethyst needs to walk the user through every little combo action happening under the hood, but when a non-obvious action could carry legal jeopardy, it should at least be disclosed, if not presented with an opt-out.
I also want to take issue with the statement that a user's posts that get reported "will disappear from everywhere" without them knowing why. This simply isn't true.
Per NIP-56: A report is a kind 1984 event that signals to users and relays that some referenced content is objectionable. The definition of objectionable is obviously subjective and **all agents on the network (users, apps, relays, etc.) may consume and take action on them as they see fit.**
Some relays may eject all notes that receive a report, others won't. Some clients may choose to hide all notes that receive reports, and others will only display a warning, or nothing at all. Some will only take action if the content came from someone you follow, or who is within your web-of-trust, and others may simply set a threshold of needing to receive a certain number of reports on a particular post. Still others may only hide the note for the person who made the report.
The point is, it doesn't just disappear everywhere. It depends entirely on what relays and clients choose to do with reports, if they choose to do anything at all.
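As an illustration of how much room NIP-56 leaves to consumers (the names and thresholds below are invented, not taken from any particular client), the same set of kind 1984 events can produce very different outcomes depending on each client's policy:

```typescript
// Illustrative only; names and thresholds are invented, no real client's policy is implied.
interface Report {
  kind: 1984;
  pubkey: string;   // who filed the report
  tags: string[][]; // e.g. ["e", "<event id>", "<report type>"]
}

type Action = "show" | "warn" | "blur" | "hide";

interface ReportPolicy {
  trustedOnly: boolean; // only count reports from follows / web of trust
  threshold: number;    // how many reports before acting
  action: Action;       // what to do once the threshold is met
}

function resolveAction(
  reports: Report[],
  trusted: Set<string>,
  policy: ReportPolicy
): Action {
  const counted = policy.trustedOnly
    ? reports.filter((r) => trusted.has(r.pubkey))
    : reports;
  return counted.length >= policy.threshold ? policy.action : "show";
}

// One client might blur after a single report from a follow...
const cautious: ReportPolicy = { trustedOnly: true, threshold: 1, action: "blur" };
// ...another might only warn after several reports from anyone.
const permissive: ReportPolicy = { trustedOnly: false, threshold: 5, action: "warn" };
```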
A warning sign isn't a heart. My event was the posting of a warning sign, for the benefit of others. Why did they change the content and intent of that event without asking me? I demand that these other clients change my content back to its original form before my personal and professional reputation is damaged.
Just spinning it to make a point.
That's fair, for sure. But if we are aware there are clients that are not differentiating kind 7 reactions as they should, we shouldn't be associating reactions with something as possibly incriminating as a report function. #Amethyst devs have no control over what other clients do with kind 7 reports, but they DO have control over whether Amethyst uses kind 7 for reports at all.
If we have a separate kind for reports, and we do, then I think it is much better to keep them contained to that kind, and not require other clients to figure out whether a kind 7 reaction is intended to be a report, or should just be treated as an emoji reaction.
Yeah, I see both sides of it, but it's such a small thing to me. The only time it happens is if someone specifically reports a single note as spam or whatever. It would be better to just report/mute the whole npub and be done with it. The client devs can do what they want and I'll roll with it.
Another spin. You posted the warning sign in agreement with the OP, since the content of their post was a warning (as some do on Facebook), and the clients displaying it as a heart used an icon that represents all reactions, to be more friendly to users coming from Twitter.
Another thought: maybe there was a simple "like" NIP before the reaction NIP? Maybe clients wanted to easily combine both NIPs in the way Amethyst combines reports/reactions.