Never looked into them deeply. They are more transparent than Umbrel, but they also use the EXPOSE instruction in Dockerfiles.

I'm at the point where, since this is just for my own use, I'll discard it.

So crazy that probably 90% of Docker containers have ports exposed by default.

It's a mystery why you'd do that unless it's a port meant to be open to the internet, which most ports aren't.

You can always open a port in Compose or when running the container, if needed. But just opening all ports to the host is crazy, especially if someone else might use your container.
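
For a concrete picture, here is what explicitly opening a single port looks like; a minimal sketch, with the service name and image as placeholders:

```yaml
# docker-compose.yml — publish only the one port you actually need
# (service name and image are illustrative placeholders)
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"   # host port 8080 -> container port 80, nothing else
```

The CLI equivalent is `docker run -p 8080:80 nginx:alpine`; any port you don't publish this way stays closed on the host side.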

Discussion

Sorry, this is beyond my depth of knowledge. Could you elaborate? I'd love to know what you mean on a more basic level, i.e. what's the risk, and what's the tradeoff? Thanks!

If you expose a port in a Docker container, it is open on the host.

If the port is e.g. 80 for a web server, that's fine.

If it's a control port, then that's asking for trouble.
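
To make that concrete, here's a minimal sketch of the pattern in question (the image and port numbers are made up for illustration):

```dockerfile
# Dockerfile — the baked-in EXPOSE pattern being criticized
FROM alpine:latest
EXPOSE 80      # fine: a port meant to serve web traffic
EXPOSE 9001    # risky: say, an admin/control port the app also listens on
```

Anyone who runs this with `docker run -P` gets every EXPOSEd port published to the host, control port included.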

Many container projects used to open all the ports the application uses, for the convenience of newbies, so they could just say "docker run container" without any options.

Containers are usually meant to be deployed via Compose, Swarm, or Kubernetes, which can open whatever ports are needed.

As I mentioned, Docker can overwrite firewall rules: it writes its own iptables rules for published ports, which bypass frontends like ufw.

I've had cases of finding a port open to the internet even though my Compose file's port specifications didn't include it and the port was blocked in ufw.
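
One mitigation I've seen for exactly this is binding the published port to loopback, so Docker's iptables rules never open it to the outside; a sketch, with the service and image names made up:

```yaml
# docker-compose.yml — bind the mapping to 127.0.0.1 so the port
# is reachable from the host only, no matter what ufw says
# (service and image names are placeholders)
services:
  admin:
    image: someapp:latest
    ports:
      - "127.0.0.1:9001:9001"   # loopback only, instead of 0.0.0.0
```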

So the best practice is to never use EXPOSE in a Dockerfile, let the deployment handle the port openings, and in general keep as many ports closed as possible.

Docker has internal networking between containers, and a reverse proxy is usually the better option for routing traffic from the web to your container.
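
Roughly like this; a sketch with placeholder names, using Caddy as a stand-in for any reverse proxy:

```yaml
# docker-compose.yml — only the proxy publishes a port; the app is
# reachable solely over the internal Docker network
services:
  proxy:
    image: caddy:alpine
    ports:
      - "443:443"            # the single port open to the world
    networks: [internal]
  app:
    image: myapp:latest      # note: no ports: section at all
    networks: [internal]     # the proxy reaches it as http://app:<port>
networks:
  internal:
```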

Thanks!