I may have drunk too much of the cloud juice

- Proxmox is not ideal for large-scale deployments with cattle

- K8s is complex but the complexity pays off 100x

- Route 53 just works

- CF Workers is good, sometimes

- R2 and B2 are the best object storage

- Hetzner is great for a lot of stuff

- EC2 as usual is a scam

Discussion

I've never run any large-scale websites, so I don't know.

> - R2 and B2 are the best object storage

Big fan of B2

> Proxmox is not ideal for large-scale deployments with cattle

I can't even imagine running Proxmox on anything but metal. Even then, you need a homogeneous collection of machines (homogeneous in purpose, not hardware).

> K8s is complex but the complexity pays off 100x

That's what they keep telling me.

> Route 53 just works

Never used it, but interested.

About the Proxmox point: that was on metal, *but* with cattle-type workloads.

Also, yes, it took me a few hours to spin up a K3s cluster (primarily spent reading the docs) and only an hour after that to deploy multi-node Pulsar and ClickHouse clusters.
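
For reference, a minimal sketch of what a single-server K3s setup can look like; the keys in the config file mirror the CLI flags of the same name, and the hostname here is a placeholder:

```yaml
# /etc/rancher/k3s/config.yaml — read by `k3s server` on startup;
# each key mirrors the CLI flag of the same name.
write-kubeconfig-mode: "0644"   # make the kubeconfig readable without sudo
tls-san:
  - k3s.example.com             # hypothetical extra SAN for the API server cert
disable:
  - traefik                     # skip the bundled ingress if you bring your own
```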

And I ended up with plain old libvirt.

Any opinions on Incus? I’m considering using it to manage containers for a medium-sized setup without a huge amount of volume.

Hetzner doesn't have hosted K8s, though.

The secret is that Kubernetes is popular because it pays massive dividends when you’re a large org running containers across thousands of nodes.

Most people don’t have these problems.

I agree. Also, you get everything that was built around K8s for deploying and managing applications.

I’m not at that big a scale yet, but it’s crossing into the territory where manual management isn’t feasible.

And importantly, most of my workloads already fit a containerized, distributed architecture easily.

It took me about an hour to configure Apache Pulsar and ClickHouse on a K3s cluster.

I much prefer configuring some minimal YAML and letting the system deal with scheduling containers. Throw an operator like FluxCD in there, point it at a repo of YAML, and the workflow for ops is quite nice.
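
To illustrate what "minimal YAML" can mean in practice, here is a sketch of a complete spec for a stateless app; the name, image, and port are placeholders:

```yaml
# deployment.yaml — the whole spec for a stateless app;
# the scheduler decides which nodes the replicas land on.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                # hypothetical app name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: registry.example.com/demo-app:1.0.0  # placeholder image
          ports:
            - containerPort: 8080
```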

I just hate debugging when K8s itself is unhappy. Even with K3s, where there’s no etcd by default, I’ve still had mTLS certs expire (why?), ultimately locking me out. At this point my personal ops have regressed to systemd units and shell scripts.

I miss my old workflow: deploying a new app was as simple as copying a dir in the repo, replacing the image and ingress.host, and committing and pushing.
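
That copy-a-dir workflow maps onto something like a kustomize overlay per app; a sketch of the two fields that change, where the base dir, image name, and host are all hypothetical:

```yaml
# apps/new-app/kustomization.yaml — copied from a sibling app dir;
# only the image and the ingress host differ from the base.
resources:
  - ../../base                  # hypothetical shared Deployment/Service/Ingress
images:
  - name: app                   # placeholder image name used in the base
    newName: registry.example.com/new-app
    newTag: "1.0.0"
patches:
  - target:
      kind: Ingress
    patch: |-
      - op: replace
        path: /spec/rules/0/host
        value: new-app.example.com   # the ingress.host swap
```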

K3s does have embedded etcd for HA, if you want to suffer.
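
For reference, the embedded etcd is opt-in on the first server, and extra servers join it; a sketch, with the token and URL as placeholders:

```yaml
# First server: /etc/rancher/k3s/config.yaml
cluster-init: true          # start embedded etcd instead of the default SQLite
token: "shared-secret"      # placeholder; all servers must use the same token

# Additional servers use the same file but join instead:
# server: https://first-server.example.com:6443
# token: "shared-secret"
```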

I somehow managed to kill the etcd instance and permanently break the cluster by restarting the masters too many times in a test deployment.

I’m honestly reconsidering my plan to use etcd for a few services.

I also hit an edge case running with only external IPs and the Hetzner CCM.

Somehow, K3s didn't populate the NodeHosts config in the bundled CoreDNS, because it only wanted internal IPs.

None of my nodes had internal IPs, so CoreDNS hung until I created the config entry manually.
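
For anyone who hits the same thing: if I remember the layout right, those entries live under the NodeHosts key of the coredns ConfigMap in kube-system, so the manual fix looks roughly like this (IPs and node names are placeholders):

```yaml
# kubectl edit configmap coredns -n kube-system
# (the Corefile key that normally sits alongside NodeHosts is omitted here)
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  NodeHosts: |
    203.0.113.10 node-1     # one "<IP> <hostname>" line per node
    203.0.113.11 node-2
```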

Other gotchas: if you remove a node's external IPv6 address in a dual-stack config, that node goes into a perpetual crash loop.

And if you use the Hetzner CCM and it isn't set to dual-stack, it will take your entire cluster down.

Where does Docker Swarm land on your list?

Did not test.

Would recommend. It’s like a mixture of Docker Compose and Kubernetes, but more lightweight than K8s and better suited to smaller deployments.
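
A sketch of what that looks like: the same Compose file format, plus a deploy section that Swarm schedules on; the service name, image, and ports are placeholders:

```yaml
# stack.yaml — deployed with: docker stack deploy -c stack.yaml demo
version: "3.8"
services:
  web:
    image: registry.example.com/demo-app:1.0.0   # placeholder image
    ports:
      - "8080:8080"
    deploy:                   # the Swarm-specific part of the file
      replicas: 3
      restart_policy:
        condition: on-failure
```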