k00b
05933d8782d155d10cf8a06f37962f329855188063903d332714fbd881bac46e
a guy that works at stacker news

pv make something interoperable

Because they prefer to use the underlying primitive rather than new syntactic sugar? Is their job command line interface design?

I see. To date, I’ve mostly been considering more naive algorithms.

I don’t love the opaqueness of machine-learned algos, but once the models are trained, at least they’re pretty small and applying them is straightforward.

Replying to fiatjaf

#[0]

That you Nassim? Profile hasn’t loaded yet

Which sophisticated algos?

I proposed this to roughly gauge sentiment on the idea. It very well could be the wrong approach.

Ideally, I think algos are clientside. Napkin math tells me it's going to be 100-1000x slower than a client displaying the normal relay-provided ordering, even assuming the algo itself runs near instantly. But it's probably worth seeing if this is actually the case first.

Some algos are more obviously infeasible on the client (relative to running them on the server), e.g. a chronological (not reverse-chronological) ordering, a "top over some large time window" ordering, or search. However, algos where you can assume you only need a subset of the data might make more sense.
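
To make the cost concrete, here's a minimal TypeScript sketch of the infeasible case: a "top over the last 30 days" feed computed purely client-side. `NostrEvent`, the `fetchEvents` stand-in, and the scoring function are all hypothetical; the point is just that the client has to download the whole window before it can rank anything, whereas reverse-chronological can stream as events arrive.

```ts
// Hypothetical sketch: client-side "top over the last 30 days".
interface NostrEvent {
  id: string;
  created_at: number; // unix seconds
  kind: number;
}

// Stand-in for whatever relay client you use to issue a REQ.
type FetchEvents = (filter: {
  kinds?: number[];
  since?: number;
}) => Promise<NostrEvent[]>;

async function topOfLastMonth(
  fetchEvents: FetchEvents,
  score: (e: NostrEvent) => number, // hypothetical ranking function
  limit = 50
): Promise<NostrEvent[]> {
  const since = Math.floor(Date.now() / 1000) - 30 * 24 * 60 * 60;
  // The client can't know what ranks highest without seeing everything,
  // so it must fetch the entire window before sorting even once.
  const all = await fetchEvents({ kinds: [1], since });
  return all.sort((a, b) => score(b) - score(a)).slice(0, limit);
}
```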

Love the new zap ux on damus

Honestly, I don't know.

If algos are not standardized, then relay operators that want algos will pool all messages from all relays to give the algo a full result set (they already do this). They do this because, as a user, I don't want to see the best stuff from just 1/10th of my follows, just like I don't want to use a search engine that searches 1/10th of the internet in its own way and then use another search engine for each other 1/10th.

You can get "consistent" algo results one of two ways on nostr afaict:

1. pool notes on mega relay, run your special algo

2. on the notes that each relay has, run a standardized algo, then combine the results on the client (rough sketch below)
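
A rough TypeScript sketch of approach 2, assuming relays expose some standardized, named algorithm and return their local top-N. The `queryAlgo` stand-in and the "top-day" name are made up:

```ts
interface ScoredEvent {
  id: string;
  score: number; // produced by the standardized algorithm on each relay
}

// Stand-in for "ask one relay to run the named, standardized algorithm
// over the notes it has and return its local top-N".
type QueryAlgo = (
  relayUrl: string,
  algo: string,
  limit: number
) => Promise<ScoredEvent[]>;

async function combinedTop(
  queryAlgo: QueryAlgo,
  relays: string[],
  limit = 50
): Promise<ScoredEvent[]> {
  const perRelay = await Promise.all(
    relays.map((r) => queryAlgo(r, "top-day", limit))
  );

  // Because every relay ran the *same* algorithm, scores are comparable
  // and the client can merge the partial rankings with a dedupe-and-sort.
  const seen = new Set<string>();
  const merged: ScoredEvent[] = [];
  for (const e of perRelay.flat()) {
    if (!seen.has(e.id)) {
      seen.add(e.id);
      merged.push(e);
    }
  }
  return merged.sort((a, b) => b.score - a.score).slice(0, limit);
}
```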

According to The Man I'm wrong about something. I can't tell what though.

Good catch. Adding a note to the gist

The args could support it, right? I was thinking the algorithm could determine how pages should work, but having something generic would be cool too.

Bloom filters could be a way to probabilistically say “I’ve already seen these ‘pages’ *probably*” … bloom filters could also be drafted in a way that helps with all REQ queries and prevents clients from re-downloading events across relays.
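
For illustration, a minimal Bloom filter over event ids, the kind of thing a client could build and ship alongside a REQ so relays skip events it has probably already seen. The sizing and hash scheme here are arbitrary, and nothing like this is specified in the NIPs yet:

```ts
// Minimal Bloom filter over event ids ("events I've probably seen").
class BloomFilter {
  private bits: Uint8Array;

  constructor(private size = 1 << 16, private hashes = 4) {
    this.bits = new Uint8Array(size >> 3); // one bit per slot
  }

  // FNV-1a with a per-hash seed; any k independent-ish hashes would do.
  private hash(value: string, seed: number): number {
    let h = (0x811c9dc5 ^ seed) >>> 0;
    for (let i = 0; i < value.length; i++) {
      h ^= value.charCodeAt(i);
      h = Math.imul(h, 0x01000193) >>> 0;
    }
    return h % this.size;
  }

  add(id: string): void {
    for (let k = 0; k < this.hashes; k++) {
      const bit = this.hash(id, k);
      this.bits[bit >> 3] |= 1 << (bit & 7);
    }
  }

  // False positives are possible ("probably seen"); false negatives are not.
  mightContain(id: string): boolean {
    for (let k = 0; k < this.hashes; k++) {
      const bit = this.hash(id, k);
      if (!(this.bits[bit >> 3] & (1 << (bit & 7)))) return false;
    }
    return true;
  }
}
```

False positives just mean a relay occasionally withholds an event you actually hadn't seen, which is the tradeoff the "probably" above refers to.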

Not really. If I wanted to give you a Twitter-like feed, purely on the client, I’d need to grab all events over the last day from your follows, and all events on those events (zaps, reactions, comments, reposts) - tens of thousands of events potentially. Then I’d perform an algorithm on all that data on a potentially resource-constrained device. It’d be really slow, and most of that data would be discarded because the algo determined it was irrelevant.
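
For a sense of scale, here are the two REQ passes (NIP-01 filter shapes) such a client-side feed implies. The kind numbers are real (1 = text note, 6 = repost, 7 = reaction, 9735 = zap receipt); `follows` and `noteIds` are placeholders:

```ts
const since = Math.floor(Date.now() / 1000) - 24 * 60 * 60;

declare const follows: string[]; // pubkeys you follow, often hundreds

// Pass 1: every note from every follow in the window.
const notesFilter = { kinds: [1], authors: follows, since };

declare const noteIds: string[]; // event ids collected from pass 1

// Pass 2: everything *about* those notes, which multiplies the download.
const engagementFilter = { kinds: [1, 6, 7, 9735], "#e": noteIds, since };
```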

You already trust your relay not to remove things you want to see. The hope in nostr design is “at least one relay I query won’t remove that message.”

You could extend that to algorithms potentially; at least one of these relays will give me an honest result (assuming there’s some ability to reconcile results … say only accepting results that roughly agree with the others). You could also provide the data necessary to verify the algorithm’s result, particularly if the algorithm is specified.
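
One hypothetical way to "reconcile results": keep a relay's answer only if its top-N overlaps enough with most of the others. The Jaccard measure and the thresholds here are made up for illustration:

```ts
// Overlap between two result sets, 0..1.
function jaccard(a: Set<string>, b: Set<string>): number {
  let inter = 0;
  for (const id of a) if (b.has(id)) inter++;
  const union = a.size + b.size - inter;
  return union === 0 ? 1 : inter / union;
}

// Keep only relays whose top-N roughly agrees with at least half the others.
function honestRelays(
  resultsByRelay: Map<string, string[]>, // relay url -> ranked event ids
  minAgree = 0.5
): string[] {
  const sets = [...resultsByRelay].map(
    ([relay, ids]) => [relay, new Set(ids)] as const
  );
  return sets
    .filter(([, ids]) => {
      const agreeing = sets.filter(
        ([, other]) => other !== ids && jaccard(ids, other) >= minAgree
      ).length;
      return agreeing >= (sets.length - 1) / 2;
    })
    .map(([relay]) => relay);
}
```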

It’s kind of early to say how it should work exactly, but specifying algorithms will reduce the trust placed in any particular relay, which is the point … even if you can’t reduce the trust to zero.

Is this a reasonable approach to algorithms on nostr?

https://github.com/nostr-protocol/nips/issues/522

Why is wrestling the only kayfabe sport?

gm make something bitcoiners want ... offchain