
A critique of package managers

(www.gingerbill.org)
109 points gingerBill | 20 comments
1. adev_ ◴[] No.45168218[source]
The argument here is (in brief) "Package management is hell, package managers are evil. So let's handle the hell manually to feel the pain better".

And honestly speaking: It is plain stupid.

We can all agree that abusing package management with ~10,000 micro-packages everywhere, as npm/python/ruby do, is completely unproductive and brings its own considerable maintenance burden and complexity.

But ignoring the dependency resolution problem entirely by saying "You do not need dependencies" is even dumber.

Not everyone works in an environment where shipping a giant blob executable built out of vendored static dependencies is even possible. This is a privilege the gamedev industry has, and the author forgets a bit too easily that it is domain specific.

Some of us work in environments where the final product is an agglomerate of >100 components developed by >20 teams around the world, versioned across ~50 git repositories, and often mixed with proprietary libraries from third-party providers. Gluing, assembling, and testing all of that is far beyond the "LOL, just stick to SDL" mindset proposed here.

Some of us develop libraries/frameworks that ship embedded in >50 products alongside other libraries, across a hellish number of combinations of compilers / ABIs / platforms. This is not something you want to test or support without automation.

Some of us have to maintain cathedrals built over decades of domain-specific know-how (scientific simulators, solvers, oil-prospecting tools, financial frameworks, ...) in multiple languages (Fortran, C, C++, Python, Lua, ...) that cannot just be rewritten in a few weeks because "I tell you: dependencies suck, bro".

Managing all of that manually is just insane, and it generally ends with a home-made, half-baked pile of scripts that badly mimics the behavior of a proper package manager.

So no, there is no replacement for a proper package manager: instead of hating the tool, just learn to use it.

Package managers are tools, and like every tool, they should be used wisely and not as a Maslow's hammer.

replies(2): >>45168243 #>>45168695 #
2. zahlman ◴[] No.45168243[source]
I mostly agree, but

> Some of us work in environments where the final product is an agglomerate of >100 components developed by >20 teams around the world, versioned across ~50 git repositories, and often mixed with proprietary libraries from third-party providers. Gluing, assembling, and testing all of that is far beyond the "LOL, just stick to SDL" mindset proposed here.

Does this somehow prevent you from vendoring everything?

replies(2): >>45168264 #>>45168291 #
3. pjc50 ◴[] No.45168264[source]
It certainly gets in the way. The more dependencies, the more work it is to update them, especially when for some reason you're choosing _not_ to automate that process. And the larger the dependencies, the larger the repo.

Would you also try to build all of them on every CI run?

What about the non-source dependencies? Check the binaries into git?

4. adev_ ◴[] No.45168291[source]
> Does this somehow prevent you from vendoring everything?

Yes. Because in these environments, sooner or later you will be shipping libraries, not executables.

Shipping libraries means your software will need to be integrated into other stacks, where you control neither the full dependency tree nor the versions in it.

Vendoring dependencies in this situation is a guarantee that you will make your customer's life miserable by throwing the diamond dependency problem right in their face.
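
To make the diamond concrete, here is a hypothetical sketch (liba/libb/libfoo are made-up names):

    /* liba.a statically vendors libfoo v1; libb.a statically vendors
       libfoo v2. Both archives therefore define the same symbols: */
    void foo_init(void);  /* one definition in liba.a, another in libb.a */

    /* The customer links both:  cc -o app main.o liba.a libb.a
       The linker resolves foo_init from whichever archive it meets
       first, so one of the two libraries silently runs against the
       wrong libfoo, and the customer cannot fix it without rebuilding
       your blob. */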

replies(1): >>45168428 #
5. alexvitkov ◴[] No.45168428{3}[source]
You're making your customer's life miserable by having dependencies. You're a library, your customer is using you to solve a specific problem. Write the code to solve that and be done with it.

In the game development sphere, there are plenty of giant middleware packages for audio playback, physics engines, renderers, and other problems that are 1000x more complex and more useful than any given npm package, and yet I somehow don't have to "manage a dependency tree" or "resolve peer dependency conflicts" when using them.

replies(2): >>45168582 #>>45168704 #
6. adev_ ◴[] No.45168582{4}[source]
> You're making your customer's life miserable by having dependencies. You're a library, your customer is using you to solve a specific problem. Write the code to solve that and be done with it.

And you just don't know what you are talking about.

Let's say I am providing a library with some high-level features for a car ADAS system, sitting on top of a CAN network with a proprietary library as the driver and interface.

It is not up to me to fix or choose the library and driver version the customer will use. They will choose the certified version they ship, test my software on it, and integrate it.

Vendoring dependencies for anything which is not a final product (product as in executable) is plain stupid.

It is a guarantee of pain and ABI madness for anybody having to deal with the integration of your blob later on.

If you want to vendor, do vendor, but stick to executables with well-defined IPC systems.

replies(1): >>45168812 #
7. gingerBill ◴[] No.45168695[source]
I am not sure how you got this conclusion from the article.

> So let's handle the hell manually to feel the pain better

This is far from my position. Literally the entire point is to make it clearer that you are heading into dependency hell, rather than to feel the pain better whilst you are there.

I am not against dependencies but you should know the costs of them and the alternatives. Package managers hide the complexity, costs, trade-offs, and alternative approaches, thus making it easier to slip into dependency hell.

replies(1): >>45169319 #
8. zahlman ◴[] No.45168704{4}[source]
When you're a library, your customer is another developer. By vendoring needlessly, you potentially cause unavoidable bloat in someone else's product. If you interoperate with standard interfaces, your downstream should be able to choose what's on the other end of that interface.
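
A minimal sketch of that in C (all names hypothetical):

    /* transport.h -- the library depends only on this interface */
    typedef struct {
        int (*send)(const void *buf, int len);
        int (*recv)(void *buf, int len);
    } transport_ops;

    /* mylib.c -- nothing vendored; the caller decides what sits on the
       other end: a real CAN driver, a mock for tests, another stack */
    int mylib_ping(const transport_ops *t) {
        return t->send("ping", 4);
    }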
9. alexvitkov ◴[] No.45168812{5}[source]
> If I am providing (lets say) a library that provides some high level features for a car ADAS system on top of a CAN network with a proprietary library as driver and interface.

If you're writing an ADAS system, and you have a "dependency tree" that needs to be "resolved" by a package manager, you should be fired immediately.

Any software that has lives riding on it must, if it has dependencies, be certified against a specific version of them, and that version should 100% of the time, without exception, be vendored with the software.

> It is a guarantee of pain and ABI madness for anybody having to deal with the integration of your blob later on.

The exact opposite. Vendoring is the ONLY way to prevent the ABI madness of "v1.3.1 of libfoo exports libfoo_a but not libfoo_b, and v1.3.2 exports libfoo_b but not libfoo_c, and in 1.3.2 libfoo_b takes in a pointer to a struct that has a different layout."
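
To spell out the struct-layout case (hypothetical headers):

    /* libfoo, old header */
    typedef struct { int id; } foo_cfg;              /* 4 bytes */
    void libfoo_b(foo_cfg *cfg);

    /* libfoo, new header -- same symbol, new layout */
    typedef struct { int flags; int id; } foo_cfg;   /* 8 bytes */
    void libfoo_b(foo_cfg *cfg);

    /* A caller compiled against the old header but resolved at load
       time against the new library passes a struct of the wrong size
       and layout: libfoo_b reads garbage, and no symbol name or
       version number will catch it. */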

If you MUST have libfoo (which you don't), you link your version of libfoo into your blob and you never expose any libfoo symbols in your library's blob.
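
With GCC/Clang and a shared library, that hiding can look like this (a sketch; names hypothetical):

    /* vendored/foo.c -- compiled into mylib.so with -fvisibility=hidden,
       so no foo_* symbol appears in mylib's dynamic symbol table */
    __attribute__((visibility("hidden")))
    int foo_parse(const char *s) { /* vendored code */ return 0; }

    /* mylib.c -- the only symbol deliberately exported */
    int foo_parse(const char *s);  /* internal declaration */
    __attribute__((visibility("default")))
    int mylib_do_thing(const char *s) { return foo_parse(s); }

The customer can then load their own libfoo next to yours and the two copies never collide.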

replies(1): >>45169042 #
10. seba_dos1 ◴[] No.45169042{6}[source]
You keep confirming that you don't know what you are talking about.

The vendoring step happens at something like Yocto or equivalent and that's what ends up being certified, not random library repos.

replies(2): >>45169207 #>>45169660 #
11. adev_ ◴[] No.45169207{7}[source]
Yes exactly.

And in addition: Yocto (or equivalent) will also be the thing providing the traceability required to guarantee that what you ship is actually what you certified, and not some random garbage compiled in a user directory on someone's laptop.
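
For readers who have not used it: in a BitBake recipe the "vendoring" is a pinned, checksummed fetch, so every shipped artifact traces back to an exact source revision. A hypothetical minimal recipe:

    # libadas_1.0.bb -- hypothetical recipe
    SUMMARY = "High-level ADAS helpers on top of CAN"
    LICENSE = "CLOSED"
    SRC_URI = "git://git.example.com/libadas.git;protocol=https;branch=main"
    SRCREV = "2f6d0c4e9b7a..."  # the exact commit that went through certification
    S = "${WORKDIR}/git"
    DEPENDS = "libsocketcan"    # resolved by the build system, not bundled
    inherit cmake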

replies(1): >>45177090 #
12. adev_ ◴[] No.45169319[source]
> I am not against dependencies but you should know the costs of them and the alternatives.

You are against the usage of a tool and you propose no alternative.

Handling dependencies by vendoring them manually, as you propose in your blog, is not an alternative.

It is an oversimplification of the problem (and the problem is complex) that can be applied only to your specific usage and domain.

replies(1): >>45169421 #
13. gingerBill ◴[] No.45169421{3}[source]
It is an alternative, just clearly not one you like. And it's not an oversimplification of the problem.

Again, what is wrong with saying you should know the costs of the dependencies you include AND the alternative approaches to not using them? E.g. using the standard library, writing it yourself, using another dependency you already have that might fit, etc.

14. alexvitkov ◴[] No.45169660{7}[source]
"Vendoring step" You cannot make this shit up.

You're providing a library. That library has dependencies (although it shouldn't). You've written that library to work against a specific version of those dependencies. Vendoring these dependencies means shipping them with your library, and not relying on your user, or even worse, their package manager to provide said dependencies.

I don't know what industry you work in, who the regulatory body that certifies your code is, or what their procedures are, but if they're not certifying the "random library repos" that are part of your code, I pray I never have to interact with your code.

replies(2): >>45170254 #>>45171245 #
15. adev_ ◴[] No.45170254{8}[source]
> I don't know what industry you work in, who the regulatory body that certifies your code is, or what their procedures are, [..], I pray I never have to interact with your code.

You illustrate perfectly the attitude problem of the average "gamedev" here.

You do not know shit about the realities and the development practices of an entire domain (here, the safety-critical domain).

But you still brag confidently about how 'my dev practices are better' and affirm without any shame that everybody in this field who disagrees is an idiot.

Just to let you know: in the safety-critical field, the responsibility for the final certification lies with the integrator. That is why we do not want intermediate dependencies to randomly vendor and bundle crap we have no control over.

Additionally, it is common for the entire dependency tree (including proprietary third-party components like AUTOSAR) to be shipped source-available and compiled / assembled from sources during integration.

That's why the usage of package managers like Yocto (or equivalent) is widespread in the domain: it allows you to precisely track and version what is used and how, for analysis and for traceability back to the requirements.

And when binary dependencies are the only available solution (as with QNX Neutrino and its associated compilers), any serious certification body (like the TÜV) will mandate that you have the exact checksum of each certified binary used in your application, plus a process to trace them back to the certification documents.

This is not something you do by dumping random fu**ng blobs in a git repository like you are proposing. You do it, again, with a proper set of processes and generally a package manager like Yocto or similar.

Finally, your comment on "v1.3.1 of libfoo" is completely moronic. You seem to have no idea of the consequences of duplicated symbols across multiple static libraries with vendored dependencies you do not control, nor what those can mean for functional safety.

16. seba_dos1 ◴[] No.45171245{8}[source]
> I don't know what industry you work in

I have dabbled in enough of them to tame my hubris a bit and to learn that various fields have specific needs that end up reflected in their processes (and this includes gamedev as well). Highly recommended before commenting any further.

17. BobbyTables2 ◴[] No.45177090{8}[source]
Did Yocto ever clean up how they manage the sysroot?

It used to have a really bad design flaw. Example:

- building package X explicitly depends on A being in the sysroot
- building package Y explicitly depends on B in the sysroot, but implicitly will use A if present (thanks autoconf!)

In such a situation, building X before Y will result in Y effectively using both A and B, perhaps enabling unintended features. Building Y then X would produce a different Y.

Coupled with the parallel build environment, it’s a recipe for highly non-deterministic binaries, without even considering reproducibility.

replies(2): >>45178260 #>>45179108 #
18. 1718627440 ◴[] No.45178260{9}[source]
> but implicitly will use A if present (thanks autoconf!)

When you want reproducibility, you need to specify what you want, not let the computer guess. Why can't you use Y/configure --without-A ? In the extreme case you can also version config.status.
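
In Yocto terms, that is exactly what PACKAGECONFIG is for: every optional feature becomes an explicit on/off switch instead of an autoconf guess. A sketch, reusing the hypothetical A/B packages from above:

    # In the recipe for Y: feature B is deliberately on and A is off,
    # regardless of what happens to be sitting in the sysroot.
    PACKAGECONFIG ??= "b"
    PACKAGECONFIG[a] = "--with-A,--without-A,liba"
    PACKAGECONFIG[b] = "--with-B,--without-B,libb"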

replies(1): >>45192842 #
19. adev_ ◴[] No.45179108{9}[source]
> Did Yocto ever clean up how they manage the sysroot?

It's better than before but you still need to sandbox manually if you want good reproducibility.

Honestly, for reproducibility alone, there are better options than Yocto nowadays. It is hard to beat Nix at this game, and even Bazel-based build flows are somewhat better.

But in the embedded world, Yocto is pretty widespread and almost the de facto norm for embedded Linux.

20. BobbyTables2 ◴[] No.45192842{10}[source]
One certainly can, but that is not the default.

Things using autotools evolved to be “manual user friendly”, in the sense that application features are automatically enabled based on auto-detected libraries.

But for automated builds, all those smarts get in the way when the build environment is subject to variation.

In theory, the Yocto recipe will fully specify the application configuration regardless of how the environment varies…

Of course, in theory the most Byzantine build process will always function correctly too!