Nostr homeservers are a better idea than many people realize, I think.

This would basically act as a layer between relays and your client and help handle more computationally intensive tasks. (+ you would not leak your IP to random outbox relays)

You could always move to another one, compare multiple for censoring, or run your own. (it can even be integrated as a fallback)


Discussion

You mean like a client backend that runs on a separate server? I have a nostrudel instance on my start9. If nostrudel did a lot of tasks in the backend, that would be what you're talking about, right?

Kind of.

You would basically change the current model of smart client <=> dumb relay to

somewhat dumb client <=> smart homeserver <=> dumb relays/smart services

The thing is that the “homeserver” can be in the client if you want, giving you full flexibility. But you could also only use it rarely to check for censoring, and use less battery + have more capabilities.

Or offload harder tasks to it.
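A minimal sketch of that split, assuming nothing about the real wire protocol: relays are modeled as plain async fetch functions, and the names (`Homeserver`, `RelayFetch`) are invented here for illustration. The point is the shape of the middle layer, not the protocol details.

```typescript
// Hypothetical sketch of "somewhat dumb client <=> smart homeserver <=> dumb relays".
// Relays are stand-in async functions, not real NIP-01 connections.
type NostrEvent = { id: string; content: string };
type RelayFetch = (filter: string) => Promise<NostrEvent[]>;

class Homeserver {
  constructor(private relays: RelayFetch[]) {}

  // Fan the query out to every relay and deduplicate by event id: the client
  // makes one request to the homeserver instead of N to random outbox relays,
  // so it never exposes its IP to them.
  async query(filter: string): Promise<NostrEvent[]> {
    const perRelay = await Promise.all(
      this.relays.map(r => r(filter).catch(() => [] as NostrEvent[]))
    );
    const seen = new Map<string, NostrEvent>();
    for (const ev of perRelay.flat()) seen.set(ev.id, ev);
    return [...seen.values()];
  }
}
```

Note that a dead relay just contributes an empty list, so the client-facing call never fails because one upstream is down.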

I think it would be really cool to offload components that handle tasks, replicated on both the homeserver and the client, or living on either one: you could repoint a method's route to the client or to the homeserver at will. Kind of like how Plan 9 from Bell Labs is decentralized, where names can point to the same machine or a different machine entirely but perform the same function.

Like a file directory might point to a script that runs on this machine or that machine, and you can change the directory tree at will depending on the dynamics of the task at hand, the network conditions, the usage of the CPU, etc.
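That rebinding idea can be sketched as a tiny mount table for functions, assuming nothing beyond what the analogy says: one path, swappable backends. Everything here (`RouteTable`, the handlers) is invented for illustration, and the "homeserver" handler is a stand-in for an actual RPC.

```typescript
// Plan 9-style mount table for functions: one name, rebindable at runtime.
type Handler = (input: string) => string;

class RouteTable {
  private routes = new Map<string, Handler>();
  bind(path: string, h: Handler): void { this.routes.set(path, h); }
  call(path: string, input: string): string {
    const h = this.routes.get(path);
    if (!h) throw new Error(`nothing mounted at ${path}`);
    return h(input);
  }
}

const onClient: Handler = s => `client:${s}`;      // cheap, runs locally
const onHomeserver: Handler = s => `server:${s}`;  // pretend this is a remote call

const routes = new RouteTable();
routes.bind("/tasks/render", onClient);
// Battery low but network fine? Rebind the same path to the homeserver.
routes.bind("/tasks/render", onHomeserver);
```

Callers only ever see `/tasks/render`; where it actually executes is a runtime decision based on battery, CPU, network conditions, and so on.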

Incidentally, I am designing an AI that runs on a similar kind of paradigm: localized on one machine with many modular components, but with this kind of flexibility between components built in.

Hell maybe I should make it run on the nostr protocol. I was thinking of using the 9P protocol though.

I feel like the dumb *server* model is one of the biggest mistakes made. Servers are undeniably far more powerful than a browser or a phone at almost any task.

This doesn't mean you have to be locked into a central server; you should be able to fall back to the client or self-host your own mini server.

Yeah. I think it should also be a priority to make servers easy to run and modular AF. But yeah for a long time I was wondering when server caching and pre-indexing for various things would catch on in more clients. I prefer to be able to have a smart client be everything I need, but options would be nice. Various algorithms, note prefetching, and caching could be assisted by a smart server.
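The caching part of that is easy to sketch. This is a minimal TTL memoizer under invented names (`QueryCache`, `expensive`); a real smart server would also prefetch and pre-index, but the core idea is just not re-running expensive work for repeated client requests.

```typescript
// Hedged sketch of server-side query caching with a TTL. `expensive` is any
// costly function (relay aggregation, indexing); time is passed in explicitly
// to keep the sketch deterministic.
class QueryCache {
  private hits = new Map<string, { value: string[]; at: number }>();
  fetchCount = 0; // exposed only to show the cache working

  constructor(private ttlMs: number, private expensive: (q: string) => string[]) {}

  get(q: string, now: number): string[] {
    const hit = this.hits.get(q);
    if (hit && now - hit.at < this.ttlMs) return hit.value; // fresh: serve cached
    this.fetchCount++;
    const value = this.expensive(q);
    this.hits.set(q, { value, at: now });
    return value;
  }
}
```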

To make it interoperable between clients, you could call the server a new relay type, specialized in augmenting resource-intensive client tasks. Users could choose them just like regular relays and have multiple fallbacks, including the client's own capabilities.

Would require a couple of NIPs of course
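Since no such NIP exists yet, this is purely hypothetical, but the fallback behavior is simple to sketch: treat the chosen "augmentation relays" as an ordered chain that ends with the client's own (slower) implementation, so the feature degrades instead of breaking.

```typescript
// Hypothetical fallback chain over augmentation servers; all names invented.
type Augmenter = { name: string; run: (task: string) => Promise<string> };

async function runWithFallback(task: string, chain: Augmenter[]): Promise<string> {
  for (const a of chain) {
    try {
      return await a.run(task);
    } catch {
      // this augmenter is down (or censoring): try the next one
    }
  }
  throw new Error("every augmenter failed, including the local one");
}
```

Putting the client's own capability last in the chain is what makes the homeserver optional rather than a new point of lock-in.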

What if instead of tethering your account to one set of very specific servers you could broadcast some delegated work (along with some sats) that any server could do for you? Like a Bitcoin tx.
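The mempool-like part of that idea can be sketched with invented names: jobs carry a fee in sats, any server can claim them, and fee determines priority, loosely like transaction selection in Bitcoin. None of this is an existing event kind or NIP.

```typescript
// Hypothetical broadcast-work pool: highest-fee job gets claimed first.
type Job = { id: string; task: string; feeSats: number };

class JobPool {
  private pending: Job[] = [];
  broadcast(job: Job): void { this.pending.push(job); }
  // A worker claims the most lucrative pending job, if any.
  claim(): Job | undefined {
    this.pending.sort((x, y) => y.feeSats - x.feeSats);
    return this.pending.shift();
  }
}
```

A real version would need the payment and result-delivery halves (and some way to verify the work), which is where it gets much harder than a Bitcoin tx.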

I don't think the added benefit is worth the loss of resilience against censorship or servers going down for whatever reason (e.g. some ISP intern causing a BGP fuckup somewhere).

And that is why you can switch or integrate it into the client.

And that can affect relays too.

Actually, that would require setup prior to the downtime event.