What's the reason you're self-hosting Kubernetes? Isn't that overkill?

Discussion

What mleku said - and I have _way_ too many nodes under my control now: a RockPro64, a NanoPi R6S, a VisionFive 2, and a 4 vCPU VPS with Hetzner (Ampere ARM - not to be confused with the NVIDIA one). So... building a cluster feels like the next logical step o.o Also, if I can build good and reusable deployments, I can publish them as Helm charts, which may or may not come in handy for future Nostr hosting. Lord knows how Primal and the others host their stuff; chances are, when Nostr grows, it'll need to scale - and that's something Kubernetes is quite good at. :) So while solving my own little conundrum, I might as well put together something others can re-use down the line.
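
To make the Helm bit concrete: a reusable relay chart mostly comes down to exposing the right knobs in values.yaml. Here's a rough sketch of the shape I have in mind - the chart and image names are hypothetical placeholders, not a published chart:

```yaml
# values.yaml for a hypothetical "nostr-relay" chart - image, ports and keys
# are illustrative placeholders.
replicaCount: 1

image:
  repository: example/nostr-relay   # placeholder relay image
  tag: "latest"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 7777                        # port the relay listens on inside the pod

ingress:
  enabled: true
  className: nginx
  host: relay.example.org           # public hostname for the relay websocket

persistence:
  enabled: true
  size: 10Gi                        # volume for the relay's event database
  storageClass: ""                  # empty string = cluster default

resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    memory: 1Gi
```

With templates wired to those values, standing a relay up anywhere is just `helm install relay ./nostr-relay -f my-values.yaml`.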

I manage 5 k8s clusters at work, but they're all in AWS, so it's way easier.

My home lab has Proxmox on it, so I'm just running an LXC per service. So far I have the automation for the initial setup of a Gitea LXC and Ansible, and I have it spinning up one more LXC.
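
For reference, the core of that automation is basically one or two tasks against the Proxmox API. A rough sketch of the shape (not my actual playbook - host, credentials, template path and IDs are placeholders, and I'm assuming the community.general.proxmox module):

```yaml
# Hypothetical sketch: create and start a Gitea LXC on a Proxmox node.
- name: Create the gitea container
  community.general.proxmox:
    api_host: proxmox.lab.lan              # placeholder Proxmox host
    api_user: root@pam
    api_token_id: "{{ proxmox_token_id }}"
    api_token_secret: "{{ proxmox_token_secret }}"
    node: pve1
    vmid: 110
    hostname: gitea
    ostemplate: "local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst"
    cores: 2
    memory: 2048
    disk: "local-lvm:8"                    # 8 GB rootfs on the local-lvm storage
    netif: '{"net0": "name=eth0,bridge=vmbr0,ip=dhcp"}'
    unprivileged: true
    state: present
  delegate_to: localhost

- name: Start it
  community.general.proxmox:
    api_host: proxmox.lab.lan
    api_user: root@pam
    api_token_id: "{{ proxmox_token_id }}"
    api_token_secret: "{{ proxmox_token_secret }}"
    vmid: 110
    state: started
  delegate_to: localhost
```

From there the container can be treated like any other Ansible host.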

LXC is so neat! I'm using it to rebuild my bitcoin/CLN node right now. Far less ephemeral than a Docker container, but way less overhead than a VM. It's super neat! Though I use Incus; IIRC it's an LXD fork, so... not too different.
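
The "less ephemeral" part mostly comes from pinning the important data to a host path via a profile, so the container itself stays disposable. A rough sketch (paths and limits are placeholders, not my actual config):

```yaml
# Hypothetical Incus profile for a long-lived bitcoin/CLN container.
# Apply with:  incus profile create bitcoin-node
#              incus profile edit bitcoin-node < bitcoin-node.yaml
# Then launch: incus launch images:debian/12 node01 -p default -p bitcoin-node
config:
  limits.cpu: "2"
  limits.memory: 4GiB
devices:
  chaindata:
    type: disk
    source: /srv/bitcoin        # host path that survives container rebuilds
    path: /var/lib/bitcoind     # where the daemon sees its datadir inside the container
```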

I really wanna try Proxmox in the future - it seems like an amazing HCI solution!

It's been fun to play around with. I've only been doing the home lab stuff for a few months; my intention was just to use Start9, but within the first week I realized it didn't give me the control I wanted, so I've been working on this instead.

Once I get a few services spun up and migrated off my Start9 VM into these individual containers, I want to start experimenting with ways to abstract the state of the LXCs, so I can see what happens when I run one instance of each DB flavor on Proxmox and connect each app to a segregated DB on those. That way DB management is centralized, which would enable a simplified DR solution.
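
Concretely, I picture "segregated DB per app on a shared instance" as just a role plus a database per service, driven from the same Ansible setup. A rough sketch, assuming a single Postgres LXC reachable at postgres.lab.lan and the community.postgresql collection - app names, hosts and secrets are placeholders:

```yaml
# Hypothetical tasks: one shared Postgres instance, one database + owner role
# per app. db_passwords is an assumed vaulted dict of per-app passwords.
- name: Create a dedicated role for each app
  community.postgresql.postgresql_user:
    login_host: postgres.lab.lan
    login_user: postgres
    login_password: "{{ postgres_admin_password }}"
    name: "{{ item }}"
    password: "{{ db_passwords[item] }}"
  loop: [gitea, nextcloud, vaultwarden]

- name: Create a dedicated database owned by that role
  community.postgresql.postgresql_db:
    login_host: postgres.lab.lan
    login_user: postgres
    login_password: "{{ postgres_admin_password }}"
    name: "{{ item }}"
    owner: "{{ item }}"
  loop: [gitea, nextcloud, vaultwarden]
```

Each app then only gets a connection string for its own database, and backing up that one Postgres LXC covers everything - which is the simplified DR angle.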