A good reminder of why “We need algorithms on Nostr” is a false and presumptuous statement. Some want algorithms. What we need, though, is user control.

Discussion

Algorithms users can control?

This is what has been mentioned a few times by developers. I feel this is the best path forward.

My vote is to keep it as is....

We are in full control of what we see here.

I'd love an explanation as to why we need a group of people who can code deciding what we non-coders see.

It's not so much about deciding what we see; it's about helping us find content that we missed. If you posted something while I wasn't using Nostr, I'll never see it unless the content is good enough to keep getting shared throughout the day. It's a problem. Algorithms would help bring certain content to the top of a feed. I'd like an untouched, as-is feed plus a feed driven by an algorithm of my choosing, to help surface content I haven't seen yet but that was shared by a lot of people I follow earlier in the day.
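
As a rough illustration of that kind of catch-up feed, here is a minimal TypeScript sketch: it surfaces notes posted since I was last online that were reposted by several of the people I follow. The Note shape, field names, and the "shared by a lot of people" threshold are assumptions made for the example, not anything defined by Nostr or any client.

```typescript
// Hypothetical sketch of a "catch-up" feed: surface notes shared by many
// of my follows while I was away. Shapes and thresholds are illustrative.

interface Note {
  id: string;
  authorPubkey: string;
  createdAt: number;       // unix seconds
  repostedBy: Set<string>; // pubkeys that reposted this note
}

function catchUpFeed(
  notes: Note[],
  follows: Set<string>,
  lastSeenAt: number,
  minShares = 3,           // hypothetical cutoff for "shared by a lot of people"
): Note[] {
  return notes
    .filter((n) => n.createdAt > lastSeenAt)
    .map((n) => ({
      note: n,
      shares: [...n.repostedBy].filter((pk) => follows.has(pk)).length,
    }))
    .filter((x) => x.shares >= minShares)
    .sort((a, b) => b.shares - a.shares)
    .map((x) => x.note);
}
```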

Absolutely, I do not think the idea of algo is the problem. Algos working in ways unknown to us and that we cannot turn off or control are the problem.

I am sure we will be surprised by how amazing an algo designed by a person with good intentions could be.

Something like a highlights-of-the-day feed, based on some combination of how many likes, comments, and zaps a note got, plus a web-of-trust score. It should be user-configurable so the user can toggle those criteria and any others. You could also adjust the timeframe, so you could have highlights of the day, week, month, or an all-time hall of fame.
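
A minimal sketch of what that configuration could look like, in TypeScript: the user toggles which signals count and picks the time window. The weighting choices and field names below are illustrative assumptions, not how any existing client actually ranks notes.

```typescript
// Illustrative, user-configurable "highlights" score. All weights are arbitrary.

interface Engagement {
  likes: number;
  comments: number;
  zapsSats: number;
  webOfTrust: number;   // assumed 0..1 score for the author
  createdAt: number;    // unix seconds
}

interface HighlightConfig {
  useLikes: boolean;
  useComments: boolean;
  useZaps: boolean;
  useWebOfTrust: boolean;
  windowSeconds: number; // day, week, month, or Infinity for all-time
}

function highlightScore(e: Engagement, cfg: HighlightConfig, now: number): number {
  if (now - e.createdAt > cfg.windowSeconds) return 0; // outside the chosen timeframe
  let score = 0;
  if (cfg.useLikes) score += e.likes;
  if (cfg.useComments) score += 2 * e.comments;         // arbitrary weighting
  if (cfg.useZaps) score += Math.log1p(e.zapsSats);     // damp very large zaps
  if (cfg.useWebOfTrust) score *= 0.5 + e.webOfTrust;   // boost trusted authors
  return score;
}

// Example: "highlights of the week" using likes, zaps, and web of trust only.
const weekly: HighlightConfig = {
  useLikes: true,
  useComments: false,
  useZaps: true,
  useWebOfTrust: true,
  windowSeconds: 7 * 24 * 3600,
};
```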

Primal is doing some of this with its trending and most-zapped columns, which is very cool.

The following+ tab on nostrgram is pretty cool but doesn't really help with the time issue.

The trust score is a concept that fascinates me. Has anyone cracked a model that can generate and analyse quantitative data and provide good results? I’d love to apply it to the field of education. Decentralized education is my obsession.

Gonna think out loud for a few paragraphs, do not mind me. Stuff comes out differently when typing.

Trust Score, or more precisely, Trustworthiness Score

When I think about decentralized education, one of the key components is the need for a built-in academic accreditation system. There is already good research on how to assess the quality and effectiveness of teaching, relying on multiple streams of information that can then be made quantitative, compiled, and turned into a quite effective score. The main problem is that they rely on other people’s opinions for the majority of these streams. How can I know who should be trusted when providing feedback and who should not? It’s tough in the real world, even more so online. We live in a world full of bots; we cannot let quantity do the work and hope that the outliers will be identified and removed.

Trustworthiness needs to be tracked and quantified. But how? We could punish bad behavior, but that’s not really how trust works; that’s how distrust works. Trust is a link built and maintained over time. The number of links a person has matters, but the quality (in terms of trust) of those links matters far more. How could these be tracked and quantified in the context of a decentralized education protocol (and other contexts)?
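
One way to make that concrete, as a rough sketch only: weight each endorsement by the endorser's own trust and by how long the link has existed, so a few mature, trusted links count for more than a pile of fresh ones. The data shapes, names, and decay constant below are hypothetical illustrations, not an established accreditation model.

```typescript
// Hypothetical trustworthiness score: links weighted by the endorser's own
// trust and by how long the link has been maintained.

interface Endorsement {
  from: string;        // endorser's pubkey
  firstSeenAt: number; // unix seconds: when the trust link was first established
}

function trustworthiness(
  endorsements: Endorsement[],
  trustOf: Map<string, number>, // prior trust of each endorser, 0..1
  now: number,
): number {
  let score = 0;
  for (const e of endorsements) {
    const ageYears = (now - e.firstSeenAt) / (365 * 24 * 3600);
    const maturity = 1 - Math.exp(-ageYears);       // links count more as they mature
    score += (trustOf.get(e.from) ?? 0) * maturity; // quality of the link dominates
  }
  // Diminishing returns on the sheer number of links.
  return score / (1 + Math.log1p(endorsements.length));
}
```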

I love the network view in Primal. I trust the people I follow to follow good stuff, but can’t always count on their reposts.

Oh wow, I just played around with Primal some more and realized that it is already a lot closer to what I described, using the explore button. It is quite good. The only thing I don't really see, which would be cool, is the ability to adjust the timeframe of the trending feed.

I like having no algorithm. I also see value in being able to curate your own feed how you like, as long as the user is in control of their own feed and can adjust or turn any filtering on or off as they see fit.

An algo can be as simple as the user setting a filter for like count, repost count, zap amount, or number of comments in the last x minutes…
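
For example, a filter like that could be as small as this TypeScript sketch: the user sets thresholds and a time window, and anything below them is hidden. The field names and defaults are illustrative assumptions, not any client's actual settings.

```typescript
// Minimal user-defined feed filter: simple thresholds plus a time window.

interface FeedItem {
  likes: number;
  reposts: number;
  zapsSats: number;
  comments: number;
  createdAt: number; // unix seconds
}

interface UserFilter {
  minLikes: number;
  minReposts: number;
  minZapsSats: number;
  minComments: number;
  withinMinutes: number;
}

function applyFilter(items: FeedItem[], f: UserFilter, now: number): FeedItem[] {
  const cutoff = now - f.withinMinutes * 60;
  return items.filter(
    (i) =>
      i.createdAt >= cutoff &&
      i.likes >= f.minLikes &&
      i.reposts >= f.minReposts &&
      i.zapsSats >= f.minZapsSats &&
      i.comments >= f.minComments,
  );
}
```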

For me it gets interesting once I can also get a sense of what the plebs I am following are following without having to count on their reposts.

For me it goes beyond the Twitter-like experience on #nostr. I foresee publishing of all kinds using the protocol. So my Twitter-like client has far less need of an algo than, say, the client I would use to browse music, video content, or long-form written content.

Absolutely, ideally configurable in the client. I’d love to see an open market for algos, to put a bit of competition amongst the people interested in developing them.

This is the way forward, I reckon.

I’ve even seen suggestions of creating an algorithm “marketplace” so to speak where users can create and share algorithms. The important part is the user needs to be in control of the curation and have the ability to turn it off.

Yes. I’d love this, as long as I decide what algo I use, or even better can configure my own algo from prebuilt packages.

User defined algorithms would be quite useful.