Why would you use #SystemD rather than #Docker for running #strfry? nostr:npub1tcekjparmkju6k83r5tzmzjvjwy0nnajlrwyk35us9g7x7wx80ys9hjmky

What the hell is wrong with you? 🤣


Discussion

No ARM64 image x)

`cd .../strfry && git pull && make update-submodules && make && systemctl restart strfry`

vs. the whole docker command. I just like it more, feels simpler this way. Most other things I use run in docker, but stuff that's really just a smol program doesn't need the overhead imo. And, well, arm64. Not everyone publishes images for that.
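For comparison, the Docker-side update flow isn't that long either. A hedged sketch, assuming strfry were defined as a Compose service named `strfry` (the service name and a published image are my assumptions, not something that exists for arm64 today):

```shell
# Hypothetical Compose-based update, mirroring the git-pull-and-restart
# one-liner above: fetch the newer image, then recreate the container.
docker compose pull strfry && docker compose up -d strfry
```

The trade-off the thread is circling: this only works once someone actually publishes an image for your architecture.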

Having issues with all the #Linux crap is the actual overhead. To be honest, saying #Docker has overhead is a hot-air argument from #Linux fanbois.

Once you download a base image like Alpine, it's less than 5 MiB & shared across all images derived from it. So the overhead is so negligible, you could use #Docker even for the tiniest programs, just for the sake of not dealing with all the #Linux crap.

Okay, just checked, the default image is based on Ubuntu, so that's definitely bigger.

From the packages it seems like it could be translated into Alpine.

I could create a Pull Request for that.
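A rough sketch of what that PR might look like. This is untested; the Alpine package names are my guesses translated from the Debian-based upstream image, and the build steps are taken from the one-liner quoted earlier in the thread, so treat every line as an assumption to verify against the strfry repo:

```dockerfile
# Hypothetical Alpine multi-stage build for strfry.
# Package names are guessed Debian→Alpine translations, not verified.
FROM alpine:3.19 AS build
RUN apk add --no-cache build-base git perl \
    openssl-dev zlib-dev lmdb-dev flatbuffers-dev \
    libsecp256k1-dev zstd-dev
RUN git clone https://github.com/hoytech/strfry /app
WORKDIR /app
# Same submodule + make flow as the systemd update one-liner above.
RUN git submodule update --init --recursive && make -j4

FROM alpine:3.19
# Runtime libs only; the build toolchain stays in the first stage.
RUN apk add --no-cache openssl zlib lmdb zstd libsecp256k1
COPY --from=build /app/strfry /usr/local/bin/strfry
ENTRYPOINT ["strfry"]
```

The multi-stage split is what keeps the final image small: the ~5 MiB Alpine base plus the binary and its runtime libraries, without g++ and the dev headers.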

When that gets accepted, would you please use #Docker? 😂

... Maybe. ;)

I am just a lazy dude. Heck, I wrote scripts to update Caddy with modules and stuff just so I don't have to type it all out, and I run Watchtower so I can update containers automatically - useful for something like Jellyfin, which doesn't need a lot of attention, if any, when updating.
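The Watchtower setup mentioned here is typically just one extra Compose service. A minimal sketch using the standard upstream image (the interval value is an illustrative choice, not from the thread):

```yaml
# Watchtower watches the Docker socket and recreates containers
# when a newer image is available.
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    # --cleanup removes the old image after updating;
    # --interval is the poll period in seconds (here: once a day).
    command: --cleanup --interval 86400
    restart: unless-stopped
```

That's the whole "don't have to think about Jellyfin updates" trick: Watchtower polls, pulls, and restarts on its own.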

Alpine would also be my personal choice; the OpenWrt thing I run on my homeserver is not Alpine, but shares some of the basics (musl, size oriented, etc.). I like it, and actually use apk to get whatever is not in the openwrt repo xD

Docker does not remove all the Linux “crap” - it just hides it. Arguably that works fine in many cases, but I have seen many where it didn’t.

So in development and for small setups (basically 99% of all setups you will do on your own) I prefer not using docker.

Have worked at companies with 40 devs, none proficient in Docker, using it for the dev setup. We spent so much time debugging and rebuilding containers, it was ridiculous.

As you just pointed out yourself, that's a skill issue.

If you don't do your reading & go into containers with a 90's #Linux attitude, you *will* fail.

I agree.

I did not say it is "removed", though. :)

It also does not "hide" it.

https://www.howtogeek.com/733522/docker-for-beginners-everything-you-need-to-know/

Not sure which resource explains how #Docker works best, but essentially: a container is derived from an image, and the container depends only on the host's kernel. Everything else comes from the image itself.
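You can see the shared-kernel point directly, assuming Docker is installed on a Linux host (the `alpine` image here is just an example):

```shell
# The container has its own userland but no kernel of its own,
# so both commands report the same kernel version.
uname -r                          # host kernel
docker run --rm alpine uname -r   # identical output from inside the container
```

That's the whole isolation model in one line: namespaces and cgroups on top of the host kernel, not a VM.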

Naturally, you usually wouldn't run ultra-low-level programs inside a container, but that wouldn't make much sense anyway.

Everything else is fine inside containers.

As for your statement regarding "development" & "small setups" I have to oppose that strongly, in its entirety.

I barely set up any app without a Dockerfile, because it's a waste of time & I also don't want to pollute my servers with Dev crap, because what feels like every Dev on GitHub has to invent his own shit without respecting generalised standards.

Especially #Frontend projects are extremely invasive & annoying.

If the project does not deliver a ready made Dockerfile, I create one myself, ad-hoc. No problem.

The only times you get issues with Alpine are when the maintainers did some nasty stuff, like depending on specific non-portable APIs, etc. Then yeah, you might not be able to `musl` that stuff.

However, even then, you can still fall back to a Debian or Ubuntu base image, which works for pretty much everything.

If there is ever something that does not work at all inside a #Docker container, it's in 99.9999999999999% of cases the Dev's fault for programming the project in the most non-portable, system-dependent & weirdly designed way possible.

Like, when they inherently hardcode #SystemD calls into the program. I mean, come on, what the hell is wrong with you...

Needless to say, I naturally run #Kubernetes on my amd64 servers & self-made Docker Compose setups on my #RaspberryPis.

Here you go. The multi-arch build thing is still missing, because that needs to be discussed with the maintainer, as there's already a #PullRequest open for that.

nostr:nevent1qqsqq32zydef8k344r7jzme2766337k6hkkf4z57qsxh0hf5jk4lmksppamhxue69uhkummnw3ezumt0d5pzquzxkcw2zduy4ckl2ccpepsrkgncyesvkkug43qelmf7cufywluqqvzqqqqqqyydcaq4

Saw it!

The rest is up to the dev to merge those in, though I don't see that being a problem.