My thinking on this recently turned things around, in a sense. These platforms have a luxury Nostr does not: a complete global state of all the things on a 'single' database.

This means that before you can even filter, you need to explore. Now I guess the main underlying thesis is that with the 'dead internet' (TM) we are forced to do this regardless (eventually), and that the platforms only provide an increasingly crumbling facade of a sensible world. Bias is not just the way we find the content we prefer; it is how we differentiate signal from noise, the real from the fake, in the first place, by distributing trust via the social graph.
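
To make 'distributing trust via the social graph' a bit more concrete: a minimal sketch (my own illustration, not any particular client's code) could score pubkeys by hop distance from your own follows, using kind-3 contact lists (NIP-02) as the graph edges, and treat events from unreachable authors as noise.

```python
from collections import deque

def follows(contact_list_event: dict) -> list[str]:
    """Extract followed pubkeys from a kind-3 event's 'p' tags."""
    return [tag[1] for tag in contact_list_event.get("tags", [])
            if tag and tag[0] == "p"]

def trust_scores(my_pubkey: str, contact_lists: dict[str, dict],
                 max_hops: int = 3) -> dict[str, int]:
    """BFS over the follow graph; fewer hops from me = more trusted."""
    scores = {my_pubkey: 0}
    queue = deque([my_pubkey])
    while queue:
        pubkey = queue.popleft()
        if scores[pubkey] >= max_hops:
            continue
        for followed in follows(contact_lists.get(pubkey, {})):
            if followed not in scores:
                scores[followed] = scores[pubkey] + 1
                queue.append(followed)
    return scores

def is_signal(event: dict, scores: dict[str, int]) -> bool:
    """Treat an event as signal only if its author is reachable."""
    return event["pubkey"] in scores
```

The specific scoring doesn't matter much; the point is that the filter is derived from your own graph rather than handed down by a platform.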

My point is, the 'pick your own algo' meme is not some cool feature we get from liberating ourselves from the platforms; it is an unfortunate necessity in the face of the impending wave of chaos that would otherwise engulf us, something the platforms won't save us from, regardless of how totalitarian they become in the attempt to keep their facade alive.

Then again, I started out by saying 'recently', but in a sense I have just been spinning my wheels for over a year.

nostr:nevent1qqsy85zcjahjvyxwkd5clx7mw62ukgf5dyvhksn9d5398gyu9erdz6qprdmhxue69uhhyetvv9ujuumwda68ytnwdsargwfe8yuj7q3qt6jxfqz9hv0lygn9thwndekuahwyxkgvycyscjrtauuw73gd5k7sxpqqqqqqztfyf2h


Discussion

A shorter way of saying it would be:

Will: 'put the power back in the hands of the user'

Translation:

There's ye olde argument that the economics of electricity and chips mean duplication of labour vis-à-vis crawling and indexing must be aggressively minimised for a wider solution to achieve any sort of long-term viability. If you've got dozens of Nostr clients all individually crawling and indexing the same relays (as the basis for each client's 'pick-your-own-algo' feature-slash-unfortunate-necessity), that represents quite some potential heat loss overall. Friendly sharing can help but, outside of the right incentive structure, might be hard to extend beyond the early days.
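
For illustration, the 'friendly sharing' fallback could be as simple as a cache-aside check before crawling; fetch_shared_index and crawl_and_index below are hypothetical stand-ins, not existing APIs.

```python
def get_index(relay_urls: list[str],
              fetch_shared_index,        # hypothetical: () -> dict | None
              crawl_and_index) -> dict:  # hypothetical: (list[str]) -> dict
    """Reuse a shared index snapshot if one exists; otherwise pay the
    full crawling-and-indexing cost ourselves (the duplicated heat loss)."""
    shared = fetch_shared_index()
    if shared is not None:
        return shared
    return crawl_and_index(relay_urls)
```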

An interesting take on that challenge here:

https://link.springer.com/article/10.1007/s41019-024-00263-w

I will have a look.

The way I see things now is that each individual client (or each user using multiple clients, for that matter) won't have to perform such exercises over and over again. Running such an operation should result in a product (simply put, a list of events), which can then be used by others.

Also, these operations can vary in depth and width, adjusting to the use case with respect to available compute and bandwidth.

At nostr:nprofile1qqsr3gwphg38qcy5lpzd2vphk3apdnfaywkpk8nq4yljkthqn33y6ncppamhxue69uhku6n4d4czumt99u9pjye9 we call this type of operation a 'pulse': a ripple through the mess of events out there, guided by a construct of biases on npubs and lists.
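
A rough sketch of what such a pulse could look like (the term is from the post; the implementation below is my guess at the shape, with fetch_follows and fetch_event_ids as placeholder relay queries): ripple outward from seed npubs, bounded by depth and width, and emit a reusable product.

```python
def pulse(seed_pubkeys: list[str],
          fetch_follows,           # placeholder: pubkey -> list of pubkeys
          fetch_event_ids,         # placeholder: pubkey -> list of event ids
          depth: int = 2,          # how many hops the ripple travels
          width: int = 50) -> dict:
    """Walk outward from the seeds and return a shareable product:
    simply put, a list of events, usable by other clients and users."""
    seen = set(seed_pubkeys)
    frontier = list(seed_pubkeys)
    event_ids: list[str] = []
    for _ in range(depth):
        next_frontier = []
        for pubkey in frontier:
            event_ids.extend(fetch_event_ids(pubkey)[:width])
            for followed in fetch_follows(pubkey)[:width]:
                if followed not in seen:
                    seen.add(followed)
                    next_frontier.append(followed)
        frontier = next_frontier
    return {"kind": "pulse-product", "event_ids": event_ids}
```

Depth and width are the knobs that let the same operation scale down to a phone or up to a server.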

In any event, I guess my main argument would be that computational efficiency is irrelevant because, due to spam, data curation (signal/noise differentiation) will be the #1 challenge, and I'd argue the only way to tackle that is in a distributed manner (i.e. relying on a network, and networks of networks, of people applying sensemaking for themselves). Any walled garden will either be too limited or overrun by weeds, with nothing in between.

To confidently offer what Primal offers, every client would have to do what Primal is doing now, that is to say, crawl and index everything, the same way Bing has to duplicate Google's (very expensive) work of crawling and indexing. There's just no getting around that at present, sans goodness-of-my-heart solutions. (Cooperative frameworks like this Espresso are well-envisioned but in early days, and address the technical more than the incentive-structure side of things.) This crawling and indexing also by nature applies to spam that has gotten past a given relay's own filter.

This isn't so much about performing a computational event each time a user makes a query, as the bulk of crawling and indexing is done in anticipation of a query: minutes before, days before, years before. Rather, this is about maintaining a foundation upon which algorithms of a certain type can be run. So we're talking about a related but somewhat different set of tasks here.
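
A toy example of that split, with purely illustrative names: the expensive pass (build_index) runs ahead of any query, and answering one later is a cheap lookup.

```python
from collections import defaultdict

def build_index(events: list[dict]) -> dict[str, set[str]]:
    """The expensive, anticipatory pass: map each word in an event's
    content to the ids of the events containing it."""
    index: dict[str, set[str]] = defaultdict(set)
    for event in events:
        for word in event["content"].lower().split():
            index[word].add(event["id"])
    return index

def query(index: dict[str, set[str]], term: str) -> set[str]:
    """The cheap, at-query-time step: a single dictionary lookup."""
    return index.get(term.lower(), set())
```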

Basically, insofar as this set of tasks goes, a 'Primal-like' user experience cannot scale on Nostr without either duplication of effort (every client sends out its own Googlebot or Bingbot) or a band-aid solution whereby the clients doing the costly work of crawling and indexing (the Primal or Primals) provide some kind of goodness-of-my-heart access to the fruits of their labour, and the clients making use of this access cross their fingers and hope it all carries on.

Ah, we are talking past each other, I see.

Yes, Primal's approach is misguided and won't work. I was not catching on that you were referring to that (that discussion might be all the rage right now, but it was not part of the context here, so that's why).

I was referring to/explaining something different entirely.

All good, thanks for the chat!

On that debate, I think Primal is doing what makes sense for their users and business, and I'd go that direction too if I were Primal. It's just that this development will nudge the Nostr ecosystem as a whole in another direction, as other clients can't be expected to have the budgets to do the same, nor can they bank on Primal's perpetual good graces.

This new direction, I think, will be one where crawling and indexing (and a patched-together global view in general) is much less relevant. Like with Linux: moving from the consumer operating system direction to the server direction suddenly made graphical interfaces (which were a weak point anyway) much less relevant.

Even both: Ask the librarian for a book AND walk through the library to pick one yourself.