I've apparently decided to go passed balls deep into more sysadmin stuff lately. I just did a migration to move to IaC for all of my load balancers. Added some testing, some staging, fun branch protection and authorization rules, post-deploy testing, and storing the previous configs as build artifacts XD.
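The deploy step, roughly, boils down to something like this (a minimal sketch with made-up hosts and paths, not the actual pipeline):

```bash
#!/usr/bin/env bash
# Hypothetical deploy step: archive the old config, ship the new one,
# smoke-test, and roll back on failure. Host and paths are made up.
set -euo pipefail

LB_HOST="lb1.example.internal"
STAMP="$(date +%Y%m%d-%H%M%S)"

# Store the previous config as a build artifact before touching anything.
ssh "$LB_HOST" "cp /etc/nginx/nginx.conf /tmp/nginx.conf.$STAMP"
scp "$LB_HOST:/tmp/nginx.conf.$STAMP" artifacts/

# Ship the new config and validate it before reloading.
scp build/nginx.conf "$LB_HOST:/etc/nginx/nginx.conf"
ssh "$LB_HOST" "nginx -t && systemctl reload nginx"

# Post-deploy test: roll back if the health check breaks.
if ! curl -fsS --max-time 5 https://example.com/healthz >/dev/null; then
  ssh "$LB_HOST" "cp /tmp/nginx.conf.$STAMP /etc/nginx/nginx.conf && systemctl reload nginx"
  echo "deploy failed, rolled back to $STAMP" >&2
  exit 1
fi
```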

Discussion

IaC is great.

Turns out being able to just make a feature branch for adding a new upstream, adjusting my routing rules, or adding a new site is a lot of fun, and much easier to test and deploy.
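A whole change ends up being about this much work (hypothetical repo layout):

```bash
# Hypothetical flow for adding a new upstream on a feature branch.
git checkout -b add-upstream-app2

# Edit the config that lives in version control (path is made up).
$EDITOR conf.d/upstreams.conf

# Sanity-check the config locally before opening the PR.
nginx -t -c "$(pwd)/nginx.conf"

git add conf.d/upstreams.conf
git commit -m "Add app2 upstream"
git push -u origin add-upstream-app2
# Branch protection takes it from there: review, staging, then deploy.
```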

What do you use for IaC scripting?

Terraform is the most robust solution, but newer options like Pulumi, CDK, or even Crossplane might be better depending on your use case and environment. There is no "best"; it's all about trade-offs.

This current setup is basically just shell commands and my OneDev server. I typically use Ansible, but I wanted to do it "by hand" this time. I'll probably switch back to Ansible once I'm comfortable with this workflow. Keep in mind I'm managing virtual machines on both cloud and metal.

Cool, I want to get into Ansible soon too

It's fun, especially since you can use your favorite IDE to tell AI: "hey, can you set up my inventory file and some playbooks to update all my machines and run an audit..." and so on. It's not defining machines but interacting with them in YAML lol
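The kind of thing it generates is roughly this (hypothetical hostnames, apt-based machines assumed):

```bash
# Hypothetical bootstrap: write a minimal inventory and an update
# playbook, then run it. Hostnames are made up.
cat > inventory.ini <<'EOF'
[all]
web1.example.internal
db1.example.internal
EOF

cat > update.yml <<'EOF'
- name: Update all machines
  hosts: all
  become: true
  tasks:
    - name: Upgrade all packages
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist
EOF

ansible-playbook -i inventory.ini update.yml
```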

I was interested in Terraform, but it didn't seem very useful for provisioning metal without a "messy" orchestration system running. I'm sure I'll still need it, but for now I want to get over the Kubernetes hump and see if I can work from there.

Keep going. Given the mess that is our industry and the fact that C-suites won't be backing down on the AI stuff any time soon, SREs will be making a ton of money and then some.

"past" lol ๐Ÿ˜†

Thanks for noticing XD

Now imagine doing all of this manually. By having apprentices do it.

That's my day job and the cause of my burnout. (And depression. xD)

While I can't relate, I can imagine XD.

I'm one step above a home datacenter, so I'm not building anything massive here, but now that I've stepped up, yeah, it's gonna be hard going back. Only a small handful of customers, but I wanted to give them more stability without going full cloud.

Sooooo felt. I have been sitting here scrutinizing every U in my 12U rack - and the cents in my bank account - to build a homelab that is fully and entirely self-sovereign. The goal is to literally live fully self-hosted, with little to no reliance on SaaS. x) It's big fun, also big annoying.

And then I go to work and see how people manually VPN (OpenVPN) and then RDP into Windows Servers to set a DNS record on the Domain Controller, and I am like, bruh. XD

Though my favorite is our "SRE". We run Grafana with OnCall. But the actual workflow for resolving incidents is often calling a customer, asking them to pop open a TeamViewer QS, and then doing stuff that way. It's hilarious - and embarrassing.

Or, possibly my favorite: the boss's kid was sent to buy servers for our customer - and he bought an Intel one. Why? Because it had hot-swap fans. Yes, it cost 2x more than the equivalent EPYC, and it was meant as a Hyper-V host, so the EPYC's multi-threaded performance would have mattered far more than Intel's single-threaded edge. But nooooo, can't hot-swap the fans there! XD

Meanwhile I am just hanging out in the Radxa Discord, working towards DKMS drivers for the NPU/VPU/GPU, and just hoping I get to use this low-level knowledge some day - between the kernel, Kubernetes, Podman/Docker (and many of the internals like CSI, CNI, container runtimes) and semi-automation - for something epic.

I mean, the last task was literally going to a client to run virus scans because they got a little scared. Hours of literally doing nothing and almost falling asleep _at_ the customer's office...

> Sooooo felt. I have been sitting here scrutinizing every U in my 12U rack - and the cents in my bank account - to build a homelab that is fully and entirely self-sovereign. The goal is to literally live fully self-hosted, with little to no reliance on SaaS. x) It's big fun, also big annoying.

I'm trying to live that dream!! And then be able to publish my work without anyone's permission. I ended up with cloud L4s to hide my physical location/IP address, that's it. I use L4 in the cloud so the TLS private keys stay on my own hardware.

I used to have a nearly full 42U while experimenting 5 or so years ago, but after a move and physical constraints, I just recently got a 28U rack back up and got a bunch of "newer" equipment.

I can relate to the fan thing. For me, I have accumulated so much Dell equipment and spare parts that I pretty much only search for used Dell machines, because I have enough spares to keep my operation moving. I used to buy and sell equipment for a little while; it helped me pay for college and food XD. That said, I've never had a fan fail in a critical machine, fingers crossed.

Big fan of Podman. I'm working on k3s; I want to move totally over to an HA kubes setup, but it's a bear to learn kubes.
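From what I've gathered so far, the k3s route to HA is mercifully short - embedded etcd gets you a 3-server control plane with roughly this (made-up IP, and I'd double-check the k3s HA docs before trusting me):

```bash
# Rough shape of a 3-server HA k3s cluster with embedded etcd.
# The IP is made up; see the official k3s HA docs for the real steps.

# On the first server: initialize the cluster.
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# The join token lands in /var/lib/rancher/k3s/server/node-token.

# On servers 2 and 3: join as additional control-plane nodes.
curl -sfL https://get.k3s.io | K3S_TOKEN="<token>" \
  sh -s - server --server https://10.0.0.11:6443
```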

I _had_ to learn k3s specifically when I got to my current job. The first month was literally a tiny bit of onboarding, and then the person then managing the cluster and Linux infra... being unceremoniously (<- how do you type that, actually? xD) fired - and me being told to take over their duties... in full.

So, k8s.io docs, cover to cover it was. XD You could say that the real apprenticeship I am having is as a Kubernetes admin, if anything, because that's the only new stuff I learn. o.o

With L4 in the cloud, you probably mean Layer 4... so, a load balancer? I just have a VPS for 8 bucks that reverse-proxies home through a Headscale VPN. How exactly did you configure your L4 to "phone home"? o.o WireGuard or something?
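(For reference, my hop is wired roughly like this - made-up names and tailnet IPs, and the headscale/tailscale flags vary a bit by version:)

```bash
# On the VPS and the home node: join the Headscale tailnet with a
# pre-auth key created on the Headscale server. Names are made up.
tailscale up --login-server https://headscale.example.com --authkey "<key>"

# On the VPS: nginx reverse-proxies to the home node's tailnet IP.
cat > /etc/nginx/conf.d/home.conf <<'EOF'
server {
    listen 80;  # TLS termination omitted for brevity
    server_name app.example.com;
    location / {
        proxy_pass http://100.64.0.2:8080;  # home node over the VPN
    }
}
EOF
nginx -t && systemctl reload nginx
```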

Yes, Layer 4. Nginx as a stream proxy pointing to home directly. No VPN. The only purpose is to hide my IP address. I then configure my firewalls to accept connections only from the IP addresses of the L4 proxies.
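The moving parts, roughly (hypothetical IPs; assumes an nginx build with the stream module):

```bash
# On the cloud box: TCP passthrough, so TLS terminates at home and the
# private keys never leave my hardware. All IPs are made up.
cat >> /etc/nginx/nginx.conf <<'EOF'
stream {
    server {
        listen 443;
        proxy_pass 198.51.100.7:443;  # home edge, TLS untouched
    }
}
EOF
nginx -t && systemctl reload nginx

# At home: only the L4 proxies may reach 443. Assumes an nftables
# "inet filter" table with a default-drop input policy.
nft add rule inet filter input ip saddr { 203.0.113.10, 203.0.113.11 } tcp dport 443 accept
```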

My cloud provider went down last weekend for like 14 hours, so I decided to configure another L4 in the US-west datacenter. So now I have us-east and us-west. I then also decided to add another L7 proxy at home and have the L4s distribute connections across the two.
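So each L4 now fans out over both home L7s, something like (made-up addresses again):

```bash
# Each cloud L4 balances across both home L7 proxies instead of
# pointing at a single box. Addresses are hypothetical.
cat >> /etc/nginx/nginx.conf <<'EOF'
stream {
    upstream home_l7 {
        server 198.51.100.7:443;  # home L7 proxy #1
        server 198.51.100.8:443;  # home L7 proxy #2
    }
    server {
        listen 443;
        proxy_pass home_l7;
    }
}
EOF
nginx -t && systemctl reload nginx
```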

Got a hardware question for you. I'm considering upgrading my workstation with a Dell R7415/R7425 chassis, but I don't know what processor(s) make the most sense in one of these things. I have GPUs to stuff into it to replace an old R720 I'm using now.

I'm looking for 2U workstation performance + GPU on a budget. Also open to opinions on other chassis if you have experience with them. I'd like to stay under $1500, max of $2000.