Docker creates more problems than it solves
Discussion
That seems to be the general trend of modern technology.
How so?
Just another layer to learn/troubleshoot/have incomplete support for (cf. render.com's Docker volume support)
I don't know man. Once you're using it frequently, it helps tremendously with scaling.
I hope you’re trying to be ironic… Docker and Kubernetes are among the most valuable tools at every stage, from development through deployment and operations.
It has its utility, but it should not be abused like it is now.
an example would be better ...
I feel your frustration. Have been there. But that statement is not true and you know it.
No, I've been using docker for 10 years off and on, it doesn't solve any real problems (that I personally have)
Different problems with easier fixes. It seems like a fair deal.
all innovation introduces new tradeoffs
🤦‍♂️🤦‍♂️
I have felt the same way. It seems to have a place, but also has gone too far in many others.
Devs that don’t understand devops should not be trusted.
Preach
podman tho
But seriously I've had lengthy discussions with some big time devs on HN about this. Specifically that docker was a rough solution to the failure that is application/library packaging.
Exactly. The problem is having a reproducible environment. Those that didn’t live the wild days of “works here until I go to deploy it for the client” don’t know what a blessing this is.
I could spend days talking about good/bad, but one very useful thing I've found besides deployment is testing during development.
It's just a replaceable environment. I can pull a fedora image to test something, ubuntu, alpine, whatever. Use --rm and it's gone when I'm done playing with it.
Nice new app you have there, let me pull it real quick to check it out. Okay I'll consider deploying it to my network. Didn't leave a trace on my host system, didn't have to run any install/uninstall commands etc.
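A minimal sketch of that throwaway-environment workflow (the image names are just examples; anything from a registry works the same way):

```shell
# Throwaway Fedora shell for a quick test; --rm removes the container on exit
docker run --rm -it fedora:latest bash

# One-off command in an Alpine container; no trace left on the host afterwards
docker run --rm alpine:latest cat /etc/os-release
```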
It seems like these two things should be decoupled, right? Some kind of provisioning code (like Ansible?) combined with a clean environment. But docker combines the two. If you have control over the actual environment (e.g. using the same VPS), you don't need a clean sandboxed environment. Plus, a lot of software platforms already have tools to normalize an environment. Do I really need to run my nodejs server behind nvm, installed in a container, running on a VPS?
My original note was inspired by working with render.com's Docker mode, which is redundant because they already have a clean environment to run the package on, and having the extra layer makes it difficult to do simple stuff like use mounted disks.
Well, that would be serverless deployment, right? The case where my build completing triggers deployment of the application, and the environment is provisioned by a configuration file that's part of the source/build. You wouldn't want your application to have control over the OS, and a whole VM is more resource-intensive to provision than a container within a VM. You wouldn't want it configuring a network stack or an interface, or loading drivers and so on. You just want your dependencies to be where you need them and always be there.
> My original note was inspired by working with render.com's Docker mode, which is redundant because they already have a clean environment to run the package on, and having the extra layer makes it difficult to do simple stuff like use mounted disks.
Sure, but what happens when you want 3 or 4 instances running on the same machine? At best that's another script; at worst it requires human intervention during a deployment. As for your mounted disk thing, I'm not sure I understand what trouble you're running into; if it's a file path, then it's usable by Docker.
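For what it's worth, several instances on one host is just a port-mapping exercise; a sketch (the image name and host path here are hypothetical):

```shell
# Three instances of the same image on one host, each on its own host port
docker run -d --name app1 -p 8081:8080 myapp:latest
docker run -d --name app2 -p 8082:8080 myapp:latest
docker run -d --name app3 -p 8083:8080 myapp:latest

# And a bind mount: any host file path is usable inside the container
docker run -d --name app4 -v /srv/app-data:/data myapp:latest
```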
Ideally you provision the server itself to have a consistent state, and your applications (multiple of them) share the system state as much as possible. It's far more resource-efficient that way. VMs generally need thick resource provisioning to make guarantees that applications don't need or want.
I use multiple kinds of network shares all the time with podman, even virtiofs. No complaints.
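e.g. something like this, assuming the share is already mounted on the host at /mnt/nfs/share (a hypothetical path):

```shell
# Bind-mount an NFS-backed host directory into a rootless podman container
# (:Z relabels for SELinux; drop it on non-SELinux hosts)
podman run --rm -v /mnt/nfs/share:/data:Z docker.io/library/alpine ls /data
```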
Yeah, I get that containers allow the use of shared resources on a single host. The problem is that the same pattern has been used (and nested) in many places where it makes no sense. The render problem is that they support deploying Dockerfiles, but not configuring the command that runs the container on the host, which means you can't map volumes. The workaround is also really weird: https://community.render.com/t/map-disk-to-docker/4707/5
I just read their persistent disk docs. Yeah, that's a limitation of render and their filesystem policy; you can't even share the filesystem mount points, which is otherwise a nicely working feature of containers. I guess they intend users to rely on other persistence methods for more complex sharing, which is probably the enterprise use case: use S3 or a database, don't use the filesystem.
To be fair though, there are other reasons they may limit this. Often VPS providers use ramfs to keep things snappy, because most apps don't need much space, and if you need a DB or some other large storage system they offer it separately. So they may provision X GB of storage, but it's sitting on an array somewhere and they map some or all of it to memory, so they don't want people actually using the storage they pay for, because they can't make the same performance guarantees. That's speculation based on what I've seen for small VPSes (like 25 GB or less of persistent storage).
That makes a lot of sense, and yeah, a lot of this comes from me trying to deploy strfry/lmdb on a PaaS 😂
I come from a hardware first background so much of this cloud stuff is still new to me as well.
Please elaborate on this statement.
I'd say the same things about Kubernetes. I like Docker though, and Docker Compose.