> The separate environment thing ensures you get the exact versions of the libraries that are guaranteed to work with the tool,
1. I think it's the wrong goal.
2. It's easier to achieve that in ways that don't require bloat on my system.
It's much nicer for users if a library interfaces with its dependencies while respecting the versioning rules: it doesn't use undocumented or unreleased stuff, doesn't rely on undocumented side effects, etc., and only relies on what's tested and released. That way, there's no need to be very selective about which versions you have.
Unfortunately, this is not how the world is. The reality is that, especially in popular ecosystems like Python, you get a crapload of very low-quality libraries with poorly defined dependencies, written by people who don't understand the infra side of things and end up with convoluted dependency requirements. But I usually try to fight back. If I absolutely have to have a library with convoluted requirements, I fork it and fix the nonsense. Or vendor it.
Another unfortunate and quite ironic side effect of this situation is that people are so dedicated to specifying nonsense requirements (e.g. pinning the patch version even though Python doesn't even have a patching mechanism) that the popular programs used to install libraries are optimized for these absurdly specific requirements. I.e. it's faster to install requirements with pip or conda if you give them the exact list, which saves the solver from having to do much work. This puts people who want to make good libraries at a disadvantage, because their libraries take forever to install.
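To make the contrast concrete, here's a minimal sketch (the package names and versions are my own illustrative picks, not anything from the discussion above): with every version pinned exactly, pip has essentially nothing left to solve.

    # hypothetical exact pins: nothing to resolve, pip just fetches and installs
    pip install 'numpy==1.26.4' 'requests==2.31.0'

    # versus open-ended specifiers, which force the resolver to do real work
    pip install 'numpy>=1.24' 'requests>=2'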
Which leads me to the following conclusion: if I want super-precise requirements, I don't need pip or conda. I can simply curl -o the packages I need, and it will be much faster and a lot more reliable.
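A rough sketch of what that looks like, assuming you already know the exact artifact you want (the URL and filename are placeholders, not a real PyPI path, and the --no-deps install at the end is just one way to drop the file in place without touching a resolver):

    # fetch the exact wheel from wherever you mirror it
    curl -o somepkg-1.2.3-py3-none-any.whl https://example.com/wheels/somepkg-1.2.3-py3-none-any.whl

    # install it as-is, skipping dependency resolution entirely
    pip install --no-deps ./somepkg-1.2.3-py3-none-any.whl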
> without risk of upgrading a library in a way that breaks something else.
I'm not afraid. I usually know what I'm installing. If it breaks, I'll fix it. I actually want to know when and why it breaks, so this is also an anti-feature for me.
> A compression based file system
Do you mean a deduplication / CoW filesystem? I'm not sure why this is a hack. Compression in filesystems typically compresses individual blocks; it won't help you if different files have the same contents. The theoretical benefit comes from redundancy (low entropy) within a file, not from files sharing contents.
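For instance, on a CoW filesystem (btrfs, recent XFS) you can see the sharing explicitly; this is just my own quick illustration, not something from the comment I'm replying to:

    # plain copy: a second full set of blocks, which per-block compression
    # would still store (compressed) a second time
    cp big.bin copy.bin

    # reflink copy: the clone shares the original's extents, so identical
    # contents are stored once, independently of any compression
    cp --reflink=always big.bin clone.bin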