I've been reading through peercuration a bit, and it seems interesting. Users have a web of peers (similar to a Web of Trust), and they rely on those peers to help curate content. If their peers rate content highly, that content is more likely to appear on the user's feed. The user is also responsible for rating content, so that their peers can receive recommendations in turn.

If a user consistently gives high ratings to content their peers disagree with (imagine a bot, or someone trying to spam), those peers may decide to drop that user from their web of peers. In other words, a user is incentivized to rate content honestly in order to maintain their network of peers.

That said, every approach to curating content for users requires some sort of input from the user. In this case, peercuration requires the user to send their peers a “score” for content they consume. Each user then averages the scores across all of their peers, which gives them an idea of what their friends think.
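
As a rough sketch of that averaging step (the data layout and names below are my own assumptions, not anything peercuration specifies), ranking a feed by the mean of peer scores might look like this:

```python
from collections import defaultdict

# Hypothetical layout: peer_scores maps peer_id -> {post_id: score}, scores in [0, 1].
def rank_feed(peer_scores: dict[str, dict[str, float]]) -> list[tuple[str, float]]:
    totals: dict[str, float] = defaultdict(float)
    counts: dict[str, int] = defaultdict(int)
    for ratings in peer_scores.values():
        for post_id, score in ratings.items():
            totals[post_id] += score
            counts[post_id] += 1
    # A post's rank is the average score across the peers who rated it.
    averaged = {post_id: totals[post_id] / counts[post_id] for post_id in totals}
    return sorted(averaged.items(), key=lambda item: item[1], reverse=True)

# rank_feed({"alice": {"p1": 0.9, "p2": 0.2}, "bob": {"p1": 0.7}})
# -> roughly [("p1", 0.8), ("p2", 0.2)]
```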

But relying on explicit scores alone is unrealistic. We cannot expect a user to rate every post they come across, especially posts they have no interest in. If they never click on a post, how does it get rated? At the very least, the scoring system should account for posts that appeared on the user's feed but were never clicked.

For the posts that were clicked, the user might like the post, comment on it, or bookmark it. In the case of a comment, an AI model could be used to interpret what the comment actually means. In the case of a like, does it really convey how much the user liked the post? A binary input surely doesn't capture that. A bookmark might be given a higher score than a like, but again these are binary signals rather than analog ones.
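
To make that concrete, here is a sketch of folding those implicit and explicit signals into a single analog score. The weights and the sentiment hook are invented for illustration, not taken from any spec:

```python
# Hypothetical weights: an impression with no click contributes nothing,
# while richer interactions contribute progressively more.
def engagement_score(clicked: bool, liked: bool, bookmarked: bool,
                     comment: str | None = None,
                     sentiment=lambda text: 0.5) -> float:
    if not clicked:
        return 0.0          # post appeared on the feed but was ignored
    score = 0.2             # a click alone signals mild interest
    if liked:
        score += 0.3
    if bookmarked:
        score += 0.4        # bookmarking weighted above a bare like
    if comment:
        # sentiment() stands in for whatever model interprets the comment's meaning,
        # returning a value in [0, 1]; an angry reply should not read as endorsement.
        score += 0.5 * sentiment(comment)
    return min(score, 1.0)
```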

Maybe there's a more expressive way of accepting input from a user: dragging your thumb across the screen in a circular motion, where the number of revolutions corresponds to how much you like the post? It could produce haptic feedback for a satisfying experience, etc.
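
As an illustration only (the exact curve is something I made up), the revolutions could be squashed into a bounded analog value so each extra swirl adds a bit less and the score never runs off the scale:

```python
import math

def drag_to_score(revolutions: float, half_point: float = 3.0) -> float:
    """Map thumb revolutions to an analog score in [0, 1).

    half_point is the number of revolutions that yields a score of 0.5,
    so a quick swirl registers mild interest and enthusiasm saturates near 1.
    """
    return 1.0 - math.exp(-math.log(2.0) * revolutions / half_point)
```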

It seems that if we can define an input that is realistic (the user will actually use it, and it is not too demanding), then the issue of content recommendation is effectively solved.

Discussion

Great feedback, and I like the idea of tactile scoring (finger drag).

I agree that we can't expect a user to score all the content they come across. I'd argue this is a feature, not a bug. If the user doesn't score some content but does score other content, then the scored content is automatically more worthy of attention. One could think of the score not as bad/good but as a low/high attention value. Spam would simply get a zero score, because it doesn't even have negative value… it's just some sort of nothing-noise.

This would definitely work, but I think it would ultimately rebuild the kind of media we currently have. Posts that garner the most attention (clickbait, ragebait) would be curated to the top of the list, instead of posts the user actually finds value in or enjoys.

It's hard to build such a system. If we were to use this attention-based feed, maybe we need "someone" who knows us. They know we're Republican, yet we spent 30 minutes reading and replying to a Democratic blog post, and our comments were angry and written without consideration.

If there were an AI model capable of capturing these details, then maybe it would show us a bit less of the content that makes us angry.

Then again, this would lead to a very narrow perspective of the world, as we'd only see Republican content in this case. Really hard to solve this lol.

Hm… why do you believe that posts the user finds value in would get lower attention than posts (clickbait, etc.) the user finds less value in?

Personally, I would neither like nor zap clickbait shit-posts, and would probably just delete them… so my peers wouldn't get that spam from me.

Users/peers sending me that type of spam, I would block or at least downgrade.

If I find users who behave the same way, and we help curate content for each other, why would we get more clickbait/spam?