I much prefer configuring some minimal YAML and letting the system deal with scheduling containers. Throw an operator like FluxCD in there, point it at a repo of YAML, and the ops workflow is quite nice.
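Roughly what that looks like, as a minimal sketch assuming Flux v2 (the repo URL and paths are invented):

    # point Flux at a Git repo of plain YAML
    apiVersion: source.toolkit.fluxcd.io/v1
    kind: GitRepository
    metadata:
      name: ops
      namespace: flux-system
    spec:
      interval: 5m
      url: https://example.com/me/ops.git   # hypothetical repo
      ref:
        branch: main
    ---
    # reconcile the manifests under ./apps into the cluster
    apiVersion: kustomize.toolkit.fluxcd.io/v1
    kind: Kustomization
    metadata:
      name: apps
      namespace: flux-system
    spec:
      interval: 10m
      path: ./apps
      prune: true
      sourceRef:
        kind: GitRepository
        name: ops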

I just hate debugging when k8s itself is unhappy. Even with k3s, where there's no etcd by default, I've still had mTLS certs expire (why?), ultimately locking me out. At this point my personal ops have regressed to systemd units and shell scripts.

Discussion

I miss my old workflow. Deploying a new app was as simple as copying a dir in a repo, replacing the image and ingress.host, and committing/pushing.
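Each app dir was basically just this, with the two marked lines swapped out (names and hosts here are made up):

    # hypothetical apps/myapp/ directory
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              image: ghcr.io/me/myapp:1.0.0        # <- replace the image
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector:
        app: myapp
      ports:
        - port: 8080
          targetPort: 8080
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
    spec:
      rules:
        - host: myapp.example.com                  # <- replace ingress.host
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myapp
                    port:
                      number: 8080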

k3s does have etcd for HA, if you want to suffer
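For what it's worth, embedded etcd is opt-in on the first server; roughly this in /etc/rancher/k3s/config.yaml (the token is a placeholder):

    # first server: switch from SQLite to embedded etcd
    cluster-init: true
    token: "CHANGE-ME-SHARED-SECRET"                  # placeholder

    # additional servers would join with something like:
    # server: https://first-server.example.com:6443   # hypothetical address
    # token: "CHANGE-ME-SHARED-SECRET"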

I somehow managed to kill the etcd instance and cause the cluster to permanently fail because I restarted masters too many times in a test deployment.

I am honestly reconsidering my plan to use etcd for a few services

I hit an edge case with the Hetzner CCM as well, where I ran nodes with only external IPs.

Somehow, the NodeHosts config in the integrated CoreDNS was not properly set by K3s because it wanted only internal IPs.

None of my nodes had internal IPs, so CoreDNS hung until I manually created the config entry
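In case anyone else hits this: the entry I added lives in the coredns ConfigMap that K3s manages (kubectl -n kube-system edit configmap coredns), under the NodeHosts key, hosts-file style. Sketch with made-up IPs and node names:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      # Corefile omitted; only the relevant key shown
      NodeHosts: |
        203.0.113.10 node-1
        203.0.113.11 node-2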

Other issues I've hit: if you remove the IPv6 external address of a node in a dual-stack config, that node will go into a perpetual crash loop.

And if you use the Hetzner CCM and it is not configured for dual-stack, it will take your entire cluster down.