Having worked on package management in all of the verticals you’ve mentioned, I can tell you that none of what you said is true.
Packages in most ecosystems are fetched over HTTP, and those packages disappear. If you’re lucky, they’re stored in a centrally maintained repository like npm or the distro repos. If you’re unlucky, it’s a decentralized system like early Go, where anyone can host their own repo. Anyone running builds at scale has caches in place to deal with ecosystem weirdness; otherwise your builds stop working at random points throughout the day.
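To give a concrete idea of the workaround, here’s a minimal sketch of the kind of read-through cache people end up running in front of upstream registries. The registry URL and cache directory are made-up placeholders, not any particular tool’s config:

```go
// readthrough.go: a minimal read-through cache for package artifacts.
// A sketch only; the upstream registry and cache dir below are placeholders.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"io"
	"log"
	"net/http"
	"os"
	"path/filepath"
)

const (
	upstream = "https://registry.example.com" // hypothetical upstream registry
	cacheDir = "/var/cache/pkg-proxy"         // hypothetical local cache
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Key the cache by a hash of the request path.
	sum := sha256.Sum256([]byte(r.URL.Path))
	path := filepath.Join(cacheDir, hex.EncodeToString(sum[:]))

	// Serve from disk if we've seen this artifact before,
	// even if upstream has since deleted it.
	if f, err := os.Open(path); err == nil {
		defer f.Close()
		io.Copy(w, f)
		return
	}

	// Otherwise fetch once from upstream and persist it.
	resp, err := http.Get(upstream + r.URL.Path)
	if err != nil {
		http.Error(w, "upstream fetch failed", http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		http.Error(w, "upstream returned "+resp.Status, http.StatusBadGateway)
		return
	}

	f, err := os.Create(path)
	if err != nil {
		http.Error(w, "cache write failed", http.StatusInternalServerError)
		return
	}
	defer f.Close()
	io.Copy(w, io.TeeReader(resp.Body, f))
}

func main() {
	if err := os.MkdirAll(cacheDir, 0o755); err != nil {
		log.Fatal(err)
	}
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Point your package manager’s registry setting at this instead of the real thing and your builds keep working the day an upstream artifact vanishes.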
Re: Go, good luck getting a Go package from 10 years back to compile; import paths pointed directly at the repository the code lived in! This was a major problem for large projects that literally failed and were abandoned halfway through the dev cycle because their dependencies disappeared.
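For anyone who missed that era, here’s a sketch of what “directly addressed” meant in practice. The dependency shown still exists; mentally swap in one of the old code.google.com paths that don’t, and that’s the failure mode:

```go
// GOPATH-era Go: the import path is literally the address of the repository
// the code lives in. `go get` cloned that URL at build time, so a deleted repo
// or a dead host (code.google.com, Gitorious, ...) meant the build stopped
// compiling. Modules + GOPROXY (2018+) finally put a cache in front of this.
package main

import (
	"fmt"

	"github.com/pkg/errors" // resolved by fetching github.com/pkg/errors itself
)

func main() {
	err := errors.Wrap(errors.New("upstream repo deleted"), "build failed")
	fmt.Println(err)
}
```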
Re: Docker - good luck rerunning a glorified series of shell scripts on every build. There’s a reason we stopped doing Ansible. When you run simple shell scripts locally they seem infallible. Run that same script over thousands of consecutive builds and you’ll find all sorts of glorious edge cases. Docker fakes reproducibility by snapshotting every step, but those snapshots are extremely fragile the moment you need to update any layer. You’ll go to rebake an image from a year ago to update the OS and find out the Dockerfile won’t build anymore.
Apt is a glorified tarball (ar-chive) with a manifest and shell scripts. Pkg too. Each carries the risk of misplacing files. *nix systems in general share one global filesystem namespace, YOLO-unpack an archive into it, and then run scripts, with the risk of irreversibly borking your system mid-update. We have all sorts of snapshotting flows to deal with this duct-tape-and-popsicle-stick approach to package management.
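If that sounds like hyperbole, here’s a rough Go sketch that lists the members of a .deb by hand. No dpkg involved, because there’s nothing clever to parse: an 8-byte magic and fixed 60-byte `ar` headers.

```go
// listdeb.go: show that a .deb is just an `ar` archive wrapping a couple of
// tarballs (control = manifest + maintainer scripts, data = the actual files).
// A sketch; pass it any .deb you have lying around.
package main

import (
	"fmt"
	"io"
	"log"
	"os"
	"strconv"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s package.deb", os.Args[0])
	}
	f, err := os.Open(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Every .deb starts with the 8-byte ar magic.
	magic := make([]byte, 8)
	if _, err := io.ReadFull(f, magic); err != nil || string(magic) != "!<arch>\n" {
		log.Fatal("not an ar archive (so not a .deb)")
	}

	// Walk the fixed-size 60-byte member headers.
	hdr := make([]byte, 60)
	for {
		if _, err := io.ReadFull(f, hdr); err == io.EOF {
			return
		} else if err != nil {
			log.Fatal(err)
		}
		name := strings.TrimRight(string(hdr[0:16]), " /")
		size, err := strconv.ParseInt(strings.TrimSpace(string(hdr[48:58])), 10, 64)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%-20s %10d bytes\n", name, size)

		// Skip the member body; data is padded to an even offset.
		if size%2 == 1 {
			size++
		}
		if _, err := f.Seek(size, io.SeekCurrent); err != nil {
			log.Fatal(err)
		}
	}
}
```

Run it against anything in /var/cache/apt/archives and you’ll see three members: debian-binary, control.tar.* (the manifest plus maintainer scripts), and data.tar.* (the files that get unpacked straight into /).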
That package management in pretty much any ecosystem works well enough to keep the industry chugging along is nothing short of a miracle. And by miracle I mean many, many human lifetimes wasted pulling hair out over these systems misbehaving.
Go back and read the last two decades of LISA papers and they’re all rehashing the same problems of maintaining packages across large system deployments, with little real innovation until the Nix paper.