Almost 40 users calculated their entire personalized webs of trust from scratch this afternoon, including Follows Network, GrapeRank, and PageRank scores for over 180 thousand pubkeys, and my server is still alive. 😄
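For anyone curious what the PageRank part of that computation involves, here is a minimal power-iteration sketch over a toy follows graph. The `follows` dict shape (pubkey → list of followed pubkeys) is invented for illustration and is not the server's actual data model:

```python
# Minimal power-iteration PageRank over a toy follows graph.
# The data shape (pubkey -> list of followed pubkeys) is an
# assumption for illustration only.

def pagerank(follows, damping=0.85, iters=50):
    nodes = list(follows)
    n = len(nodes)
    scores = {k: 1.0 / n for k in nodes}
    for _ in range(iters):
        new = {k: (1.0 - damping) / n for k in nodes}
        for node, outs in follows.items():
            if outs:
                # each followed pubkey gets an equal share of this node's score
                share = damping * scores[node] / len(outs)
                for out in outs:
                    new[out] = new.get(out, 0.0) + share
            else:
                # dangling node: spread its score evenly over all nodes
                for k in nodes:
                    new[k] += damping * scores[node] / n
        scores = new
    return scores

follows = {
    "alice": ["bob", "carol"],
    "bob": ["carol"],
    "carol": ["alice"],
}
ranks = pagerank(follows)
# carol is followed by both alice and bob, so she ends up ranked highest
```

Scores stay normalized (they sum to 1), and repeated iteration converges toward the stationary distribution of a random surfer on the follows graph.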
Discussion
It sounds like you've optimized the system well to handle such a large amount of data while keeping everything running smoothly. Congratulations on the successful execution! 😄
Although it looks like CPU utilization did hit 100% briefly … I still have some optimization to do
I tried it today. The hopstr part is really neat. I found myself wishing the random npub were actually 3, so I could choose which one to explore.
This has come a long way very quickly.
I’m glad you’re enjoying it!
I’m not sure I understand — you’re wanting to select a random npub 3 hops away from you? Doesn’t it already allow you to do that?
I'm sorry... I meant it would be neat if it brought up 3 random npubs instead of 1, so it felt like a "pick a door" type of exploration.
Ohhh that’s a nice idea! Right now you can click the button again to get a new random profile, but you can’t see more than one at a time. Should I add that feature? Maybe a selector at the top for how many random profiles you get with each roll of the dice?
I clicked through a few times at different hops before I thought, "wow this would be a lot of fun if it gave a few random suggestions."
I like that idea. Maybe suggest a few as a default & have the option to change that up or down within a reasonable margin.
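A rough sketch of what the "pick a door" feature could look like, assuming a `follows` map from each pubkey to the pubkeys it follows. The function names and data shape are hypothetical:

```python
import random

# Hypothetical sketch: sample several random npubs at an exact hop
# distance from a starting pubkey, "pick a door" style. `follows`
# maps each pubkey to the pubkeys it follows (an assumed data shape).

def npubs_at_hops(follows, start, hops):
    """Return the set of pubkeys exactly `hops` steps from `start`."""
    seen = {start}
    frontier = {start}
    for _ in range(hops):
        nxt = set()
        for node in frontier:
            for out in follows.get(node, ()):
                if out not in seen:
                    seen.add(out)
                    nxt.add(out)
        frontier = nxt
    return frontier

def random_doors(follows, start, hops, count=3):
    """Sample up to `count` random profiles at the given hop distance."""
    pool = list(npubs_at_hops(follows, start, hops))
    return random.sample(pool, min(count, len(pool)))
```

With a default `count=3` and a selector to adjust it within a reasonable range, this matches the suggestion above.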
My paid job right now is devising a series of criteria and parameters to make guesses about good matches between two gamers
Do you use graph databases, knowledge graphs, recommendation algorithms?
No, but I am working with the key/value store that's used to build the Dgraph database, and it is extremely simple and flexible to work with. I'm glad I persisted in trying to get a good understanding of how to use it; it's amazing
Also, in my current paid gig I'm *building* a recommendation algorithm
I will use Badger if I can find a replication library for it, so it can be used across a number of nodes to produce a replicated store
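Badger itself is a Go library, but the access pattern being described (byte keys and byte values on an embedded store, no server process) can be illustrated with Python's stdlib `dbm` module as a rough analogue:

```python
import dbm
import os
import tempfile

# Rough analogue of an embedded key/value store: byte keys, byte
# values, get/set through a file-backed database. This only
# illustrates the access pattern, not Badger's actual Go API.

path = os.path.join(tempfile.mkdtemp(), "kvdemo")

# write a value under a namespaced key
with dbm.open(path, "c") as db:
    db[b"follows:alice"] = b"bob,carol"

# reopen and read it back
with dbm.open(path, "r") as db:
    value = db[b"follows:alice"]
```

The appeal of this shape is that the application owns all structure: key prefixes act as tables, and values are whatever bytes you choose to encode.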
Not up to that point with the work yet; right now I'm just defining the match criteria and creating weight estimates before I start throwing it at a heap of random data
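A minimal sketch of what weighted match criteria between two gamers might look like; the criteria names, weights, and profile fields here are all invented for illustration:

```python
# Hypothetical weighted match scoring between two player profiles:
# each criterion returns a similarity in [0, 1], and the overall
# score is their weighted average. Names and weights are invented.

WEIGHTS = {
    "skill": 0.5,
    "region": 0.3,
    "playstyle": 0.2,
}

def criterion_scores(a, b):
    return {
        # closer skill ratings score higher (assumes a 0-100 scale)
        "skill": 1.0 - min(abs(a["skill"] - b["skill"]) / 100.0, 1.0),
        # same region is a hard match, different region scores zero
        "region": 1.0 if a["region"] == b["region"] else 0.0,
        # mismatched playstyles are penalized but not disqualifying
        "playstyle": 1.0 if a["playstyle"] == b["playstyle"] else 0.5,
    }

def match_score(a, b, weights=WEIGHTS):
    scores = criterion_scores(a, b)
    total = sum(weights.values())
    return sum(weights[k] * scores[k] for k in weights) / total

p1 = {"skill": 70, "region": "EU", "playstyle": "casual"}
p2 = {"skill": 80, "region": "EU", "playstyle": "ranked"}
score = match_score(p1, p2)
```

Keeping each criterion as its own 0–1 function makes it easy to tune the weight estimates independently once real data is available.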
Ok - that feature is going on my to do list 🛠️