Good morning! Just had a random thought: the same tools and techniques that are used to block spam are used to censor and block the spread of information. In the end it’s not the tools that kill, it’s the people who use them 🐶🐾🫡☕️☕️☕️🤔🤔🤔


Discussion

True, and this is why I think we should have censorship tools that the user invokes or disables on their own. That way people can see the things they want to see and block the stuff they don't. When we try to auto-block stuff for everyone, some people who wanted to see it can't; when we don't block anything, people don't want to stay. I guess it kinda relates to the saying "One man's trash is another man's treasure." I don't know, just a thought. GM! 🐶

GM! No real solution exists unfortunately 🐶🐾🫡☕️

Yeah, but it should. This is something worth working on. If I knew how myself, I would.

I am saying that there is no theoretical solution, so go figure 🐶🐾🫡

You could make a central relay that pulls from a bunch of other relays and filters stuff out before sending it to you. This would raise huge censorship and centralization concerns, I'm sure, though.

And the user would choose what to filter or not filter.
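The aggregating-relay idea above can be sketched in a few lines. This is purely illustrative: `Event` and `aggregate_and_filter` are invented names, not part of any real relay implementation, and the key point is that the filter predicate is supplied by the *user*, not by the relay operator.

```python
# Hypothetical sketch: pull events from several upstream relays,
# deduplicate them by id, and apply a filter the user chose.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass(frozen=True)
class Event:
    id: str
    author: str
    content: str

def aggregate_and_filter(
    relays: Iterable[Iterable[Event]],
    user_filter: Callable[[Event], bool],
) -> list[Event]:
    """Merge events from many relays, drop duplicates,
    then keep only what the user's own filter accepts."""
    seen: set[str] = set()
    feed: list[Event] = []
    for relay in relays:
        for ev in relay:
            if ev.id not in seen and user_filter(ev):
                seen.add(ev.id)
                feed.append(ev)
    return feed

# Example: the user (not an admin) decides to hide "BUY NOW" spam.
relay_a = [Event("1", "alice", "gm"), Event("2", "spammer", "BUY NOW!!!")]
relay_b = [Event("1", "alice", "gm"), Event("3", "bob", "coffee time")]
feed = aggregate_and_filter([relay_a, relay_b],
                            lambda e: "BUY NOW" not in e.content)
```

The centralization worry mentioned above still applies: whoever runs the aggregating relay can see (and could drop) everything before the user's filter ever runs.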

Sometimes (most of the time) user choices can't keep up with the onslaught of spam or illegal activity. We scale by trusting our peers and each doing what we can.

Yeah, maybe the answer is building it more directly into clients. You could also ask users what they are and aren't interested in seeing when they sign up, and let them change it later. I personally wish there were a tool to block all spam in my feed, but no such tool exists.

I guess the answer is to give people control of the tools rather than corporations.

Who do you think runs corporations? 🐶🐾😂

Obviously, people run corporations. I meant that users on the various platforms should be the ones deciding what they want to block or filter. Companies create the tools, but users decide how and when they want to use them.

Ok, let me be a little more clear. The tool is a black box for a user, unless they examine the content of each blocked thingy. Whoever controls the box, controls what the user can see. 🐶🐾🫡

I get you now… There are no perfect solutions. People can say whatever they want, but as a user I may not want to see certain types of content in my global feed. There needs to be a way for me to curate my own online experience without blocking each user individually. Hopefully, the standard will be for companies to be transparent about how their algorithms and tools work.

In some cases, yeah, but note that with BTC/LN/etc. we can now create systems that objectively disincentivize spam, instead of creating tools that give admins centralized power to decide what to filter. It won't ever be perfect, but it can be a hell of a lot better/fairer through better tech.
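One concrete version of "objectively disincentivize spam" is a hashcash-style proof-of-work requirement: each post must carry a nonce that cost real compute to find, so a single post is cheap but blasting thousands is expensive, and no admin decides what gets through. This is a minimal sketch of that idea (a Lightning paywall per post would be similar in spirit); the function names and the difficulty value are illustrative, not any protocol's actual spec.

```python
# Hashcash-style sketch: a post is only accepted if its nonce makes
# sha256(message:nonce) start with `difficulty` zero bits.
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def mine(message: str, difficulty: int) -> int:
    """Brute-force a nonce meeting the difficulty (the sender pays this cost)."""
    nonce = 0
    while True:
        d = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if leading_zero_bits(d) >= difficulty:
            return nonce
        nonce += 1

def valid(message: str, nonce: int, difficulty: int) -> bool:
    """Cheap check relays/clients run before accepting a post."""
    d = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return leading_zero_bits(d) >= difficulty

# One post at difficulty 8 takes ~256 hash attempts: trivial for a person,
# but the cost scales linearly with spam volume.
nonce = mine("gm", 8)
```

The appeal is exactly what the comment above describes: the rule is objective and the same for everyone, so there is no black box for an operator to tune.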

Same with AI