It has always irked me, too, that no one really knows exactly what the stars mean.
It's worth considering what rating systems we can dream up that are well defined and (hopefully) more useful as a result.
When it comes to contextual trust ratings (where trust means how much *weight* you'd put on someone's answer to a question on a given topic), at some point it occurred to me that these ratings only make sense if they're defined *relative* to someone else.

Example: suppose I rate how good I think people are at giving directions. I take the "average person's" skill to be 1 by definition, but Alice is really, really good at directions. How good? Well, if Alice says turn left and 4 random "average" people say turn right, I'll follow Alice and go left. But if 5 random people say go right, that's enough to persuade me she's wrong, and I go right. So her rating falls somewhere between 4 and 5.
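To make that concrete, here's a minimal sketch of the definition in code. The function name, the decision rule, and the numbers are all just my own illustration, assuming a single topic and the average person pinned at a weight of 1:

```python
# A person's rating is how many "average" opinions (each worth 1)
# it takes to outweigh theirs.

def would_follow(person_weight: float, dissenting_average_people: int) -> bool:
    """Follow the rated person iff their weight exceeds the combined
    weight of the average people disagreeing with them (weight 1 each)."""
    return person_weight > dissenting_average_people * 1.0


# Alice outweighs 4 random people but not 5, so any rating strictly
# between 4 and 5 reproduces the behaviour described above.
alice = 4.5

assert would_follow(alice, 4)      # 4 say right, Alice says left -> go left
assert not would_follow(alice, 5)  # 5 say right -> go right
```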
This rating system is very abstract, and most people may not think that way, but at least it's well defined. And if we start using algos that treat ratings as if that's how they work, perhaps people would intuitively start using them in the way I just described. Well, maybe *some* people would. Those people, the ones able to provide the most objectively useful ratings on various topics, are the ones I'd be willing to reward with some sats for their opinions!
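As for what I mean by "algos that treat ratings as if that's how they work", here's a rough sketch (the names and data are invented, not any existing system): pick the answer with the largest total rating weight behind it, with unrated people defaulting to the average weight of 1.

```python
from collections import defaultdict

def weighted_answer(answers: dict[str, str], ratings: dict[str, float]) -> str:
    """Pick the answer with the largest total rating weight behind it."""
    totals: defaultdict[str, float] = defaultdict(float)
    for person, answer in answers.items():
        totals[answer] += ratings.get(person, 1.0)  # unrated people count as average
    return max(totals, key=totals.get)


# Same scenario as above: Alice (rated 4.5) vs. 4 average people.
answers = {"alice": "left", "bob": "right", "carol": "right",
           "dave": "right", "erin": "right"}
ratings = {"alice": 4.5}

print(weighted_answer(answers, ratings))  # -> "left" (4.5 beats 4 x 1.0)
```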