This false sense of reproducibility is why I funded https://docs.stablebuild.com/ some years ago. It lets you pin things in Dockerfiles that are normally unpinnable: OS package repos, Docker Hub tags, random files on the internet. So you can go back to a project a year from now and actually get the same container back.
Isn't this problem usually solved by building an actual image for your specific application, tagging it, and pushing it to some Docker repo? At least that's how it's been at the places I've worked that used Docker. What am I missing?
Build and tag internal base images on a regular cadence that individual projects then use in their FROM. You’ll have `company-debian-python:20250901` as a frozen-in-time version of all your system level dependencies, then the Dockerfile using it handles application-level dependencies with something that supports a lockfile (e.g. uv, npm). The application code itself is COPY’d into the image towards the end, such that everything before it is cached, but you’re not relying on the cache for reproducibility, since you’re starting from a frozen base image.
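As a rough sketch of the application-side Dockerfile (the registry, image name, file layout, and module name here are all made up, and the base image is assumed to already ship Python and uv):

```dockerfile
# Hypothetical application Dockerfile; assumes the frozen base image already
# contains Python and uv. Names are illustrative.
FROM registry.example.com/company-debian-python:20250901

WORKDIR /app

# Dependencies first: this layer only rebuilds when the lockfile changes.
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-dev --no-install-project

# Application code last, so code changes don't invalidate the dependency layer.
# Reproducibility comes from the frozen base image + lockfile, not the build cache.
COPY src/ ./src/
RUN uv sync --frozen --no-dev

# "myapp" is a placeholder module name.
CMD ["/app/.venv/bin/python", "-m", "myapp"]
```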
The base image builds can be pretty easily automated, and the individual projects using them can then expect fresh base images on a regular basis and test updating to the latest at their leisure, without getting any surprise changes.
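The base image itself might look something like this (package choices and the uv image tag are illustrative), hypothetically rebuilt by a scheduled job and tagged with the build date:

```dockerfile
# Hypothetical internal base image, rebuilt on a schedule and tagged by date,
# e.g. company-debian-python:20250901. System packages are whatever the Debian
# repos serve on build day; consumers pin the dated tag, never "latest".
FROM debian:12-slim

RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        ca-certificates \
        python3 \
        python3-venv \
    && rm -rf /var/lib/apt/lists/*

# Ship a pinned copy of uv so application images get a known dependency manager.
# (The uv image tag here is illustrative.)
COPY --from=ghcr.io/astral-sh/uv:0.5.0 /uv /uvx /usr/local/bin/
```

The scheduled job then just builds and pushes this with something like `docker build -t company-debian-python:$(date +%Y%m%d) .`, and projects bump their FROM line whenever they're ready.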
At that point you're doing most of the work yourself, and the value add from Docker is pretty small (although not zero): most of the gains come from using a decent language-level dependency manager.
It's not, but at that point you're giving up on most of the things Docker was supposed to get you. What about when you need to upgrade a library dependency (but not all of them, just that one)?
I'm not sure what the complication here is. If the application code or some dependency changes, you build a new Docker image as needed, possibly with an updated Dockerfile if that's required. The Dockerfile is part of the application repo and versioned just like everything else in it. CI/CD builds and pushes a new image on PRs or tag creation, just like it would any other application package or artifact. Frequent building and pushing of images does eat up registry space over time, but you can deal with that by cleaning out old images once you can determine they're no longer needed.
Docker reuses layers where it can and only rebuilds the ones whose inputs changed. It's not like images grow in size without bound each time something in the Dockerfile changes.