Aggregated trustworthiness is something I'm really curious about. Could anyone point me towards info on how this could be tackled? I found some theoretical papers but not much on implementations.
There was a verified human initiative but they verified me as a cat… not sure if I can trust that.
🤣🤣🤣 they get to decide, not you
Yeah, but I could have been a long-term scammer…
You could still very well be… 🤔 Is this whole "look, this is my face" act just part of the long con?
Exactly. You never know, I could be the bad guy.
Us non-dev plebs are fucked… Assessing trustworthiness is a bitch, offline and online. The only way I know how to deal with it is to pretend to be stupid and naive and see if people try to take advantage of it.
… problem is, I am often truly stupid and naive.
Anyone of prominence who cares about this should have their own domain, one would think.
I do not disagree. We can make it easier for sure, but people need to learn to take domain names seriously 🤷‍♂️
I knew it!
I was thinking more in the way of looking at interactions between users: diversity of the network, frequency, and duration of formed links (especially this last one). Easier said than done… it would probably be easier to apply in a commercial context.
Trust is a link maintained over time. There must be ways of looking at this that could provide meaningful insight.
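To make that concrete: here is a toy sketch of scoring a user from their interaction log, weighting link duration highest, then network diversity, then raw frequency. Everything here (the function name, the weights, the input shape) is hypothetical, just to illustrate the idea, not an implementation of any published model.

```python
from collections import defaultdict

def trust_score(interactions, w_duration=0.5, w_diversity=0.3, w_frequency=0.2):
    """Toy aggregated-trust heuristic (all weights hypothetical).

    interactions: list of (peer_id, timestamp) pairs for one user.
    Favours long-lived links, then diversity of peers, then count.
    """
    by_peer = defaultdict(list)
    for peer, ts in interactions:
        by_peer[peer].append(ts)

    if not by_peer:
        return 0.0

    # Duration: mean lifespan of each link (last contact minus first)
    durations = [max(ts) - min(ts) for ts in by_peer.values()]
    avg_duration = sum(durations) / len(durations)

    diversity = len(by_peer)       # number of distinct peers
    frequency = len(interactions)  # total interaction count

    return (w_duration * avg_duration
            + w_diversity * diversity
            + w_frequency * frequency)
```

With this weighting, one link maintained over a long span scores higher than a short burst of many messages to the same peer, which is the intuition about duration mattering most.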
(I was drawn to the concept of decentralized academic accreditation. It would mostly rely on multiple streams of human feedback; assessing the trustworthiness of the human giving the feedback is key, and rather problematic.)
https://firstmonday.org/ojs/index.php/fm/article/view/3731/3132
This one was intriguing, on models for aggregated trustworthiness.