Honestly, I don't know.

If algos are not standardized, then relay operators that want algos will pool all messages from all relays to give the algo a full result set (they already do this). They do this because, as a user, I don't want to see the best stuff from just 1/10th of my follows, just like I don't want to use a search engine that searches 1/10th of the internet in its own way and then use a different search engine for each other 1/10th.

You can get "consistent" algo results one of two ways on nostr afaict:

1. pool notes on a mega relay, run your special algo

2. run a standardized algo on the notes that each relay has, then combine the results on the client (sketched below)
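
A minimal TypeScript sketch of option 2, assuming a hypothetical `fetchNotes` relay query and a made-up recency score standing in for whatever the standardized algo would be:

```typescript
// Option 2 sketch: each relay ranks only the notes it holds with the
// same standardized algo ("map"), and the client merges the per-relay
// rankings ("reduce"). Note, fetchNotes, and scoreNote are hypothetical.

interface Note {
  id: string;
  content: string;
  created_at: number; // unix seconds
}

interface ScoredNote {
  note: Note;
  score: number;
}

// Hypothetical relay query (e.g. a NIP-01 REQ under the hood).
declare function fetchNotes(relayUrl: string): Promise<Note[]>;

// The standardized algo: every relay must produce the same score for
// the same note, or the client-side merge is apples to oranges.
// Here, a stand-in recency decay.
function scoreNote(note: Note, now: number): number {
  const ageHours = (now - note.created_at) / 3600;
  return 1 / (1 + ageHours);
}

// "Map": run per relay, over only that relay's notes.
async function rankOnRelay(relayUrl: string, limit: number): Promise<ScoredNote[]> {
  const notes = await fetchNotes(relayUrl);
  const now = Date.now() / 1000;
  return notes
    .map((note) => ({ note, score: scoreNote(note, now) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, limit);
}

// "Reduce": run on the client. Dedupe by id (the same note may live on
// several relays, but scores identically everywhere) and re-sort.
async function mergedFeed(relayUrls: string[], limit: number): Promise<Note[]> {
  const perRelay = await Promise.all(relayUrls.map((r) => rankOnRelay(r, limit)));
  const byId = new Map<string, ScoredNote>();
  for (const scored of perRelay.flat()) byId.set(scored.note.id, scored);
  return [...byId.values()]
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((s) => s.note);
}
```

The catch is visible in the merge step: it only works cleanly because the score is a pure function of the note itself. An algo that depends on global statistics (say, "top 1% by engagement") can't be computed correctly from one relay's slice, which is exactly the consistency problem above.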

According to The Man, I'm wrong about something. I can't tell what, though.


Discussion

So it's essentially a map/reduce mechanism, where (1) performs both steps on the relay and (2) maps on the relay and reduces on the client; curious how much room for innovation a standardization would allow for.

Much like Bitcoin Script, it would be interesting to consider a constrained query DSL for writing these mappers/reducers, with an upper bound on compute, and let a fee market arise for clients requesting to execute these different algos.
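
As a toy sketch of what that could look like (all names hypothetical; this is not a proposed NIP): a tiny scoring-expression DSL evaluated with a hard step budget, so a relay can bound per-note compute and price it:

```typescript
// Toy sketch of a compute-bounded scoring DSL, in the spirit of
// Bitcoin Script's execution limits. All types and names hypothetical.

type Expr =
  | { op: "const"; value: number }
  | { op: "field"; name: "age_hours" | "reply_count" | "zap_total" }
  | { op: "add" | "mul" | "min" | "max"; left: Expr; right: Expr };

type NoteStats = { age_hours: number; reply_count: number; zap_total: number };

// Evaluate an expression against one note's stats, charging one unit
// of budget per node visited. Throwing when the budget runs out gives
// the relay a hard upper bound on per-note compute.
function evalExpr(expr: Expr, stats: NoteStats, budget: { left: number }): number {
  if (--budget.left < 0) throw new Error("compute budget exceeded");
  switch (expr.op) {
    case "const":
      return expr.value;
    case "field":
      return stats[expr.name];
    case "add":
      return evalExpr(expr.left, stats, budget) + evalExpr(expr.right, stats, budget);
    case "mul":
      return evalExpr(expr.left, stats, budget) * evalExpr(expr.right, stats, budget);
    case "min":
      return Math.min(evalExpr(expr.left, stats, budget), evalExpr(expr.right, stats, budget));
    case "max":
      return Math.max(evalExpr(expr.left, stats, budget), evalExpr(expr.right, stats, budget));
  }
}

// Usage: a client submits the expression plus a budget; the relay could
// charge a fee proportional to budget * number of notes scored.
const expr: Expr = {
  op: "add",
  left: { op: "field", name: "zap_total" },
  right: {
    op: "mul",
    left: { op: "const", value: 2 },
    right: { op: "field", name: "reply_count" },
  },
};
const score = evalExpr(expr, { age_hours: 3, reply_count: 5, zap_total: 40 }, { left: 100 });
```

Because the DSL has no loops or recursion, every program terminates anyway; the budget mostly caps the cost of very deep expression trees, which is what a fee market would price.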

Map/reduce is an interesting lens to view this through. It might make pretty good sense depending on the algorithm, and there's lots of prior art to pull from.