#note18z7rqv87qt0nu3v0rajf4ghnrqcnnc3ujgeqy3zswgs2nwh3nq3qmwhnlh

#nostr #nostrplebs #amethyst

#[0]

Hello Vitor.

I wonder why you seem so confident about your choices. Luckily this is not Twitter, so backlash is and will be an opposing force ;)

I wonder if you made the behavior of the report button in Amethyst so messed up for other users because you want network effects to play out as a content rating system. Are you so sure this is the right way for users to engage in content filtering: not by engaging in it themselves, but by depending on someone else sharing your opinion?

For me the answer is very clear:

This is a bad way to utilize network effects, because it doesn't inform the network about the content, so the network cannot curate it or verify the decisions of others.

Instead, focus on a simplified content-boosting approach rather than content filtering. It is more accurate and more verifiable.

Hopefully you will see this coming along in your feed. Do not underestimate the use of network effects for information: it's quite complicated, to say the least!

Discussion

Actually, I think the reporting function is pretty good in Amethyst. It does everything to get people to Block instead of reporting, which I think is a good call by nostr:npub1max2lm5977tkj4zc28djq25g2muzmjgh2jqf83mq7vy539hfs7eqgec4et at the time.

> Are you so sure this is the right way for users to engage in content filtering? By not being able to engage in it, but being dependent on someone else to have the same opinion as you?

I am not sure which filter you are talking about, but if it is a Report it does allow people to engage with it (it shows the "Show Anyway" button). If it is simple dumb spam, then the idea is to be temporary until those duplicates disappear. It can be better for sure, but somebody needs to spend time developing it.

> Instead, focus rather on a simplified content boosting approach, instead of content filtering.

We have more tools to boost content than to block it. So we are focusing on content boosting. But filtering is a necessity. No one wants the dick pics or non-family-friendly content when setting up their kid's accounts.

The majority of these features were developed because users didn't feel comfortable onboarding their own friends in the network. That feeling is changing but it's taking us a while to give them the confidence (and tools) to onboard others.

I definitely get where you're coming from with this, and the intention is right.

But, Devil's advocate... aside from children, what if my friends and family want to see the dick pics you mention? I don't, but I can hit block. It's not up to me or anyone else to decide what others see or partake in. If they choose to delegate that to me, great! I see a real use case for the ability to subscribe to another's mute list in the future. But I personally don't feel the default should be to remove content without user intervention. Perhaps a part of onboarding in the future could include a way to subscribe to community moderation from the start, as long as users are aware they are entering a filtered and curated experience.
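Subscribing to another user's mute list already has a plausible protocol shape: under NIP-51, mute lists are replaceable events (kind 10000), so a client could fetch a trusted user's list with an ordinary subscription filter and apply its public `p` tags locally. A minimal sketch in Python; the pubkey is a placeholder, and the kind number and tag layout should be checked against the current NIP-51 text:

```python
import json

MUTE_LIST_KIND = 10000  # NIP-51 mute list (verify against current spec)

def mute_list_filter(author_pubkey: str) -> dict:
    """Build a Nostr REQ filter for one user's replaceable mute list."""
    return {"kinds": [MUTE_LIST_KIND], "authors": [author_pubkey], "limit": 1}

def muted_pubkeys(mute_list_event: dict) -> list:
    """Extract muted pubkeys from the event's public 'p' tags."""
    return [tag[1] for tag in mute_list_event.get("tags", []) if tag and tag[0] == "p"]

# What the client would send to a relay, framed as a REQ message:
req = json.dumps(["REQ", "mod-sub", mute_list_filter("hex-pubkey-placeholder")])
```

Because the list is just an event the subscriber fetches, the delegation stays opt-in and revocable: stop subscribing and the filtering stops.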

If you report, they can always see what you have marked. You are not deciding for them but adding information to their feed in the same way you do with likes, zaps, boosts or reports.
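This matches how reports work at the protocol level: under NIP-56 a report is just another signed event (kind 1984) that points at a note or profile with a reason tag, so other clients are free to surface it, weigh it, or ignore it, exactly like a like or a zap. A rough, unsigned sketch in Python; the tag layout follows my reading of NIP-56, and report-type values (e.g. "spam", "nudity") should be verified against the spec:

```python
import time

REPORT_KIND = 1984  # NIP-56 report event

def build_report(reporter_pubkey: str, reported_event_id: str,
                 reported_author: str, report_type: str, comment: str = "") -> dict:
    """Assemble an unsigned NIP-56 report; id computation and signing omitted."""
    return {
        "kind": REPORT_KIND,
        "pubkey": reporter_pubkey,
        "created_at": int(time.time()),
        "tags": [
            ["e", reported_event_id, report_type],  # the note being reported
            ["p", reported_author, report_type],    # its author
        ],
        "content": comment,  # optional free-form explanation
    }

report = build_report("reporter-pubkey", "event-id", "author-pubkey", "spam")
```

Nothing in the event forces a hide: each client decides whether a report from this pubkey means "blur", "show anyway", or nothing at all.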

That's a solid approach, but I've still encountered individuals in my blocked list whom I wouldn't have blocked and had never interacted with. I don't think that was the intention behind the filtering you set up, but humans are a wildcard.

It's tricky, there are a lot of accounts I follow and interact with that people close to me would never want to deal with. But they also may want to see things I block. I am mainly concerned with the end user being blind to what's happening, or individuals who may not realize their actions are getting them filtered ending up in a personal echo chamber.

Hello, idk if my previous post came through, but I understand what you want to achieve. This should be handled in personal circles and through direct client-side filtering based on patterns, instead of relying on other users (since in the future there will surely be troll armies coming).

We will keep improving this for sure.