The basic concept, as I understand it, is that the more isolated each thing is, the easier maintenance and changes will be. Let's say Fulcrum needs way more RAM; I could just move that container to a different host machine without reconfiguring anything else.
Or let's say I'm trying to install some tool or app, I follow instructions to install all these dependencies, and then find out that info was outdated or that those dependencies cause other headaches. Oops, too late, already inserted those dependencies into way too many places to try to undo. Even worse if I did that in the Linux OS that runs my main bitcoin node.
I've restarted a Docker + mempool install twice now by blowing it away and starting clean. It just frees you to really iterate through try, fail, retry cycles.
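That try-fail-retry loop is painless when all of the app's state lives inside Docker rather than in the host OS. A hypothetical sketch of what that looks like in a compose file (the service name, image tag, and credentials here are placeholders, not the actual mempool setup):

```yaml
# Hypothetical docker-compose.yml fragment: all state sits in a named
# volume, so "blow it away and start clean" is two commands and the
# host OS never accumulates dependencies.
services:
  mempool-db:
    image: mariadb:10.11          # example pinned tag, not necessarily what mempool uses
    volumes:
      - db-data:/var/lib/mysql    # state lives in a Docker-managed volume, not on the host
    environment:
      MYSQL_DATABASE: mempool
      MYSQL_ROOT_PASSWORD: changeme   # placeholder; use a real secret

volumes:
  db-data:
```

With a layout like this, `docker compose down -v` deletes the containers and their volumes, and `docker compose up -d` rebuilds everything from scratch.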
Great explanation, thx. Still, I would think it consumes a lot more hardware resources the more Linux LXCs you deploy. And don't you get the same dependency isolation with Docker?
A rough, bad 10,000 ft view is that Docker gives you isolated virtual OS environments within your computer (ignoring clustering stuff I don't understand), but Proxmox gives you a further layer of abstraction where the virtual OS can be moved between physical machines at will with little or no interruption.
Gotta shut down Server A for some reason, but it hosts your production bitcoin node? No prob, just some pointy-clicky Proxmox stuff (I haven't gotten to this level yet) and move your live node to Server B.
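Proxmox exposes that migration in the web UI and also on the CLI. A sketch, assuming a hypothetical cluster where the node names are serverA/serverB and the guest ID is 100 (both made up for illustration):

```shell
# Live-migrate a VM (guest ID 100) to another cluster node with no downtime:
qm migrate 100 serverB --online

# LXC containers use restart-mode migration instead (a brief restart,
# since true live migration is VM-only in Proxmox):
pct migrate 100 serverB --restart
```

One design consequence worth knowing: live migration generally assumes the guest's disk is on shared or replicated storage the target node can also reach, so storage layout matters before you rely on this.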
As for Docker, it does occasionally have breaking changes in new versions. Project X is for Docker v.foo but Project Y requires v.bar.
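One common way to blunt that problem is pinning versions explicitly instead of pulling `:latest`, so a new release can't break a working stack unannounced. A hypothetical compose fragment (the project names and tags are placeholders):

```yaml
# Hypothetical fragment: pinned image tags mean redeploys are reproducible.
services:
  projectx:
    image: example/projectx:1.4.2   # placeholder tag
  projecty:
    image: example/projecty:2.0.1   # placeholder tag
```

Note this pins the images, not the Docker Engine itself; if two projects genuinely need incompatible Docker versions, the usual escape hatch is running them on separate hosts or VMs, which is exactly what Proxmox makes cheap.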