Like a file directory that might point to a script running on this machine or that machine; you could change the directory tree at will depending on the task at hand, the network conditions, CPU usage, etc.
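A minimal sketch of that idea: a mutable "directory tree" whose leaves are handlers that may run locally or remotely, rebindable at any time. All names and the selection policy here are hypothetical, not a real API.

```python
# Hypothetical sketch: paths map to callables, and a path can be
# rebound to a different machine's handler as conditions change.

def run_local(task):
    return f"local:{task}"

def run_remote(task):
    return f"remote:{task}"

class Namespace:
    def __init__(self):
        self.tree = {}

    def bind(self, path, handler):
        # Rebind a path at any time, e.g. when CPU load
        # or network conditions change.
        self.tree[path] = handler

    def call(self, path, task):
        return self.tree[path](task)

ns = Namespace()
ns.bind("/tasks/transcode", run_local)
print(ns.call("/tasks/transcode", "video.mp4"))  # local:video.mp4

# Conditions changed: point the same path at a remote worker.
ns.bind("/tasks/transcode", run_remote)
print(ns.call("/tasks/transcode", "video.mp4"))  # remote:video.mp4
```

This is roughly the 9P/Plan 9 flavor mentioned below: the namespace itself is the dispatch mechanism.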


Discussion

Incidentally, I am designing an AI that runs on a similar kind of paradigm. It's intended to be localized to one machine with many modular components, but with this kind of flexibility between components built in.

Hell maybe I should make it run on the nostr protocol. I was thinking of using the 9P protocol though.

I feel like the dumb *server* model is one of the biggest mistakes made. It cannot be denied that servers are significantly more powerful than a browser or a phone for just about any task.

This doesn't mean you have to be locked into a central server; you should be able to fall back to, or self-host, your own mini server.

Yeah. I think it should also be a priority to make servers easy to run and modular AF. For a long time I was wondering when server-side caching and pre-indexing would catch on in more clients. I prefer a smart client that can be everything I need, but options would be nice. Various algorithms, note prefetching, and caching could all be assisted by a smart server.

To make it interoperable between clients, you could call the server a new relay type, specialized for augmenting resource-intensive client tasks. Users could choose them just like regular relays and could have multiple fallbacks, including the client's own capabilities.

Would require a couple of NIPs of course
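The fallback chain described above could look something like this: the client tries each "augmentation relay" in order and, if none responds, falls back to doing the work itself. The relay URLs and the query function are made up for illustration.

```python
# Hypothetical sketch: try specialized relays in order, then fall back
# to the client's own capabilities.

def query_relay(url, query):
    # Stand-in for a network call to an augmentation relay; here we
    # simulate that the first relay is unreachable.
    if url == "wss://relay-a.example":
        raise ConnectionError("relay unreachable")
    return f"{url} answered: {query}"

def local_compute(query):
    return f"computed locally: {query}"

def fetch(query, relays):
    for url in relays:
        try:
            return query_relay(url, query)
        except ConnectionError:
            continue  # try the next fallback
    return local_compute(query)  # last resort: the client itself

result = fetch("prefetch my feed",
               ["wss://relay-a.example", "wss://relay-b.example"])
print(result)  # wss://relay-b.example answered: prefetch my feed
```

With an empty relay list, `fetch` degrades gracefully to local computation, which is exactly the "smart client first, smart server optional" stance above.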

What if instead of tethering your account to one set of very specific servers you could broadcast some delegated work (along with some sats) that any server could do for you? Like a Bitcoin tx.
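A rough sketch of what such a broadcast job request might look like: a content-addressed note carrying the task and an offered payment, which any server could pick up and fulfill. The kind number, tag names, and fields are all invented here; a real design would need its own NIP.

```python
import json
import hashlib

# Hypothetical "delegated work" event. Everything below (kind 31999,
# the "bid" tag) is made up for illustration, not a defined NIP.

def make_job_request(pubkey, task, sats):
    event = {
        "pubkey": pubkey,
        "kind": 31999,                 # invented kind for "job request"
        "tags": [["bid", str(sats)]],  # offered payment in sats
        "content": task,
    }
    # Content-address the request so any server can reference it,
    # loosely like a tx id.
    event["id"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()
    ).hexdigest()
    return event

job = make_job_request("npub1example", "transcode video.mp4 to 720p", 500)
print(job["id"][:8], job["tags"])
```

Because the id is derived from the content, competing servers can race to fulfill the same request and reference it unambiguously when claiming the sats.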