Yes.
In theory, I agree: there's often little need to reinvent the wheel, and DRY is worth practicing. I've basically been a library dev for the past 5 years, and that's been my focus. C specifically has no concept of package management, by design; it's portable at the binary level, or more precisely at the level of the literal translation unit (.c -> .o). No other modern language I know of is, by default. C# has assemblies, which are PE files, so semi-portable, but bytecode rather than machine code; Rust can be made to produce binaries, but its ecosystem isn't built around binary objects. Point is, C is "portable" at the OS/maintainer level, not the source level. Unless a package is well distributed on an OS platform, it's generally best to vendor it.
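To make the translation-unit point concrete, here's a minimal sketch. The file `vendor/crc32.c` is hypothetical, my invention for illustration: the point is that any C compiler turns a vendored .c into a plain object file, and any C linker on the platform consumes that object, with no registry or package manager in the loop.

```c
/* vendor/crc32.c -- a hypothetical vendored translation unit.
 *
 * Compile it like any other object, then link it like any other object:
 *
 *   cc -c vendor/crc32.c -o crc32.o
 *   cc main.c crc32.o -o app
 *
 * No package manager or third-party server is involved at any step. */
#include <stddef.h>
#include <stdint.h>

/* Standard reflected CRC-32 (polynomial 0xEDB88320), computed bit by bit. */
uint32_t crc32(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return ~crc;
}
```

That object-level contract is the only "package interface" C gives you, which is why distribution falls to the OS maintainer, or to vendoring.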
I choose vendoring for many reasons:
- I want to be transparent and offer a stronger guarantee of the EXACT product a customer receives, with proof that it was tested exactly as they receive it
- Project maintainers must check changes in by hand. I have to hand-verify the changes secp256k1 introduces and take that responsibility for my customers
- Repeatable builds are great for homogeneous apps, but polyglot projects require a mess of build tools. My build containers are around 3-5 GB, testing natively on Windows takes hours of setup without an image, and builds still take 10+ minutes on a 32-core machine
- ALL of the source used is signed by me
- Doesn't require any 3rd-party servers or trust; a signed archive and build tools are all you need
- Doesn't require strict API/ABI contracts. They're ideal, but most immature projects don't offer any API guarantees and introduce breaking changes even in patch versions
- Allows me to release my own updates by pulling in and verifying upstream changes, and to get things out faster. Huge core libraries like argon2, zlib (the Cloudflare fork), brotli, rpmalloc, and mimalloc haven't tagged a release in years despite patches and upgrades still being committed
- Unlikely to introduce regressions through dependency changes (especially in immature projects), because the source is verified. Tests are great, but they're never truly exhaustive
Cons:
- That's a lot of responsibility and regular work
- The codebase gets large and git gets slow
- Unless verified, a vendoring maintainer can alter the code (like I do), and those changes aren't reviewed by the original maintainers or their downstreams, so it can require trust
- Updates can land less frequently
I give users a choice; they can:
- use my packages (binaries) [easy]
- build from source [hard]
- plug in the original, unmodified library, or add it to the linker args and build [hard] (see the sketch below)
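A rough sketch of that third option, assuming libsecp256k1's public API (`secp256k1_context_create` and friends): the application source is identical either way, and only the build line decides whether the vendored tree or a stock system library is linked. The include paths are illustrative, and a real single-TU build of secp256k1 may still need its configuration defines, so treat the build lines as schematic.

```c
/* app.c -- the same source serves both build modes; only the invocation
 * changes, never the code:
 *
 *   vendored copy:   cc -Ivendor/secp256k1/include app.c \
 *                       vendor/secp256k1/src/secp256k1.c -o app
 *   system library:  cc app.c -lsecp256k1 -o app
 */
#include <secp256k1.h>
#include <stdio.h>

int main(void)
{
    secp256k1_context *ctx = secp256k1_context_create(SECP256K1_CONTEXT_NONE);
    if (ctx == NULL) {
        fputs("failed to create secp256k1 context\n", stderr);
        return 1;
    }
    /* ... keygen, signing, verification ... */
    secp256k1_context_destroy(ctx);
    return 0;
}
```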
All of this said, you're right: I still use some well-maintained libraries from NuGet in production, and the same with npm. I try to keep them minimal, but I'm not really a "web" developer, so I'm going to rely on npm. It's a culture problem, IMO. That the same person who vendors 8 C files and maintains his own makefiles will still just run `npm install tailwind` is the problem. I care far less about the 80 dependencies in the UI than about the 8 C files that will be executing raw machine instructions on my customers' processors. It's my area of focus. I'd probably focus more if it weren't so easy to `npx run x`. From the outside looking in, the focus of JS has been rapid development: just run this command, just add this package, just do X. It's that simple! Convenience > security.
I apologize if I came across as too sycophantic, and for attempting to volley the ball your way.