So it doesn’t say what should be DONE about the labeling, whether it arrives as a report or as a content warning. Relays, and users through their client apps, will still need to decide how to act on it.
For example, in Nos we’re thinking of putting content behind a tap-to-reveal if a content warning was added either by the original publisher or by somebody you follow or somebody they follow, the same way we scope our ‘discover’ tab. Eventually users could choose automatic settings, say: always hide anything reported as spam or phishing, keep porn behind a tap-to-reveal, and show everything else.
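As a rough illustration of that kind of policy, here’s a hypothetical sketch of how a client might combine per-report-type preferences with follow-graph scoping. All the names and the distance field are illustrative assumptions, not the actual Nos implementation:

```typescript
// Hypothetical sketch: map reports on a note to a display action, only
// trusting reports from within the viewer's follow graph. Not a real API.

type ReportType = "spam" | "phishing" | "nudity" | "other";
type Action = "hide" | "tapToReveal" | "show";

interface Report {
  type: ReportType;
  // Hops from the viewer: 0 = self, 1 = someone you follow,
  // 2 = someone they follow, etc. (assumed field, for illustration)
  reporterDistance: number;
}

interface Preferences {
  hide: ReportType[];        // e.g. always hide spam and phishing
  tapToReveal: ReportType[]; // e.g. keep nudity behind a tap
  maxDistance: number;       // only trust reports within this many hops
}

function actionFor(reports: Report[], prefs: Preferences): Action {
  // Ignore reports from outside the trusted follow graph.
  const trusted = reports.filter(r => r.reporterDistance <= prefs.maxDistance);
  if (trusted.some(r => prefs.hide.includes(r.type))) return "hide";
  if (trusted.some(r => prefs.tapToReveal.includes(r.type))) return "tapToReveal";
  return "show";
}
```

The point of the distance cutoff is that a report from a stranger three hops out carries no weight, which is the same scoping idea as the discover tab.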
Some things will be constrained by the app stores: for example, if the user is a minor, we won’t show sexual content and won’t let them choose otherwise. And we’ll keep a copy of the Child Sexual Abuse reports on a relay we run, do some checking on them, and block that content for everybody. We don’t want to expose our users to legal jeopardy for having that material on their devices.
Again, other relay operators, clients, and users are free to make different choices: host different kinds of content, see different kinds of content.
I think blurring or collapsing reported content and showing the report reason and text is a very sensible approach. Then let the user decide whether they want to see it or not.
And something that occurred to me just now: make sure it’s possible to block reporters as well, to prevent report-spam.
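That defense could be as simple as filtering reports by author before they influence anything. A minimal sketch, assuming a mute list of reporter pubkeys (field names are illustrative, not a real Nostr library API):

```typescript
// Hypothetical sketch: drop reports filed by muted reporters so that
// report-spam from a blocked account has no effect on what gets hidden.

interface IncomingReport {
  reporterPubkey: string; // pubkey of the account that filed the report
  reason: string;         // e.g. "spam", "nudity"
}

function withoutMutedReporters(
  reports: IncomingReport[],
  muted: Set<string>
): IncomingReport[] {
  return reports.filter(r => !muted.has(r.reporterPubkey));
}
```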
Sounds like the right approach to me.