>I might not be understanding what you mean, but I don't think the user/machine distinction is super relevant in most deployments: in practice the server's software shouldn't be running as root anyways, so it doesn't matter much that it's installed in a user-held virtual environment.
Many software packages need root access but that is not what I was talking about. Distro users just want working software with minimal resource usage and incompatibilities.
>Rust packages of different versions can gracefully coexist (they do already at the crate resolution level), but static linkage is the norm.
Static linkage is deliberately avoided as much as possible by distros like Debian due to the additional overhead. It's overhead on the installation side, and even more overhead on the servers, which have to serve essentially the same dependency many times over, once for every package that embeds it, when it could have been downloaded just once.
>And with respect to resource consumption: unless I'm missing something, I think the resource difference between installing a stack with `pip` and installing that same stack with `apt` should be pretty marginal -- installers will pay a linear cost for each new virtual environment, but I can't imagine that being a dealbreaker in most setups (already multiple venvs are atypical, and you'd have to be pretty constrained in terms of storage space to have issues with a few duplicate installs of `requests` or similar).
If the binary package is a thin wrapper around a venv, then you're right. But these packages are usually designed to share dependencies with other packages where possible. For example, if you had two packages installed that both use some huge library, they only need one copy of that library between them, and updating the library only requires downloading the new version of the library itself. If the library is statically linked, the same update requires downloading it twice, along with all the other code it's linked into, potentially using many times the network and disk resources. Static linking is convenient sometimes, but it isn't free.
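To make the difference concrete, here's a rough back-of-the-envelope sketch in Python; all the sizes and package counts are made-up illustrative numbers, not measurements of any real package:

    # Back-of-the-envelope comparison of update costs (illustrative numbers only).
    LIB_SIZE_MB = 40   # hypothetical "huge library"
    APP_CODE_MB = 15   # hypothetical size of each application's own code
    NUM_APPS = 2       # two installed packages depend on the library

    # Shared/dynamic linking: the library ships as its own package,
    # so fixing it means downloading the library exactly once.
    shared_update_cost = LIB_SIZE_MB

    # Static linking: every application embeds its own copy, so fixing the
    # library means re-downloading each app, library and app code together.
    static_update_cost = NUM_APPS * (LIB_SIZE_MB + APP_CODE_MB)

    print(f"shared-library update:    {shared_update_cost} MB")  # 40 MB
    print(f"statically linked update: {static_update_cost} MB")  # 110 MB

The gap grows linearly with the number of packages that embed the library and with the number of machines pulling the update.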
OSX statically links everything and has for years. When there was a vulnerability in zlib they had to release 3GB of application updates to fix it in all of them. But you know what? It generally just works fine, and I'm not actually convinced they're making the wrong tradeoff.
>OSX statically links everything and has for years. When there was a vulnerability in zlib they had to release 3GB of application updates to fix it in all of them. But you know what? It generally just works fine, and I'm not actually convinced they're making the wrong tradeoff.
Let's see. On one hand, static linking costs more compile time, more disk, more bandwidth, and more RAM. On the other hand, we have a different, slightly more involved linking scheme that saves on every one of those resources. It seems to me that static linking is rarely the right choice for most applications and systems.
> On the other hand we have a different, slightly more involved linking scheme that saves on every hardware resource.
But, at least as implemented on mainstream Linux distributions, at the cost of "DLL Hell" that makes releasing reasonably granular libraries on a reasonably granular schedule essentially impossible, as per the article.
I'm all for dynamic linking in theory, but the way the likes of Debian do it makes the costs too high.
>But, at least as implemented on mainstream Linux distributions, at the cost of "DLL Hell" that makes releasing reasonably granular libraries on a reasonably granular schedule essentially impossible, as per the article.
The article is about problems associated with Rust. I think Rust is too new an ecosystem to have developers committing to stable library versions, or applications committing to depend on stable libraries, both of which are essential to making dynamic linkage work.
Linux doesn't have DLL Hell generally. That term comes from the Windows world where there are systems in place to store every version of every DLL ever seen, because even DLLs with the same version may not be interchangeable. That is truly hellish.
It absolutely does, especially when there are incompatible upgrades. People still talk about libc.so.6 issues. Whenever you upgrade a system library you can get breakage; only last week I had to deal with exactly that kind of problem (on a Debian system, no less).
> there are systems in place to store every version of every DLL ever seen, because even DLLs with the same version may not be interchangeable. That is truly hellish.
The fact that they've put those systems in place makes Windows much better positioned than Linux, IMO. The implementation may be ugly, but ultimately Windows programs continue to work even in the face of incompatible library changes, and without having abandoned dynamic libraries entirely. I think it's possible to do dynamic linking right (perhaps with something like Nix), but as implemented on traditional Debian-style distributions the cost is too high.
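As a rough illustration of the Nix-style approach (a simplified toy model in Python, not Nix's actual store layout or hashing scheme): each library version gets its own install prefix derived from a hash of its identity and build inputs, so incompatible versions can sit side by side and each program keeps linking against exactly the version it was built with:

    import hashlib

    def store_path(name: str, version: str, build_inputs: str) -> str:
        # Derive a unique install prefix from the package's identity and inputs
        # (toy model -- real Nix hashes the full build recipe and its dependencies).
        digest = hashlib.sha256(f"{name}-{version}-{build_inputs}".encode()).hexdigest()[:12]
        return f"/store/{digest}-{name}-{version}"

    # Two incompatible versions of the same library coexist at distinct paths,
    # so upgrading one consumer's dependency never breaks another consumer.
    print(store_path("zlib", "1.2.11", "gcc-10"))
    print(store_path("zlib", "1.3", "gcc-13"))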
RAM is still a limited resource, and bloated memory footprints hurt performance even if you technically have the RAM. The disk, bandwidth, and package-builder CPU usage involved in statically linking everything is, on its own, reason enough not to do it where possible.