That's not really the same thing. What about different applications that share dependencies? With Docker, they would have to share an identical prefix of a linear Dockerfile to take advantage of any deduplication, so you're back to image layers. It's telling that the proposed solutions involve completely avoiding one of Docker's primary features; maybe Docker isn't very well designed to begin with.
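To make that concrete, here's a hypothetical pair of Dockerfiles. Docker's build cache keys each layer on its parent plus the exact instruction text, so two images share layers only while their instruction sequences are identical from the top:

    # app-a/Dockerfile
    FROM debian:bookworm
    RUN apt-get update && apt-get install -y libpq5   # shared with app-b
    COPY app-a /usr/local/bin/app-a                   # divergence point

    # app-b/Dockerfile
    FROM debian:bookworm
    RUN apt-get install -y libpq5 && apt-get update   # same packages, but
                                                      # different instruction
                                                      # text: no layer sharing
    COPY app-b /usr/local/bin/app-b

Even though both images end up with the same libraries installed, the second RUN line is spelled differently, so Docker builds and stores it as a separate layer.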
With Guix, any sub-graph that is shared is naturally deduplicated, because we have a complete and precise dependency graph of the software, all the way down to libc. I find myself playing lots of games with Docker to get the most out of its brittle cache, trying to reduce build times and share as much as possible. Furthermore, Docker's cache has no notion of time, so it silently goes stale. Guix builds aren't subject to change with time because builds are isolated from the network. Docker needs the network, otherwise nothing would work, because it's just a layer on top of an imperative distro's package manager. Docker will happily cache the image resulting from 'RUN apt-get upgrade' forever, but what happens when a security update to a package is released? You won't know about it unless you purge the cache and rebuild. Docker is completely disconnected from the real dependencies of an application, and is therefore fundamentally broken.
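A hypothetical illustration of the staleness problem:

    FROM debian:bookworm
    RUN apt-get update && apt-get upgrade -y

    # The first build fetches whatever packages exist today.
    # Every later `docker build .` reuses the cached layer as-is,
    # even after a security fix is published upstream; only
    # `docker build --no-cache .` (or pruning the cache) picks it up.

Nothing in the Dockerfile changed, so as far as Docker's cache is concerned, nothing changed.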
Docker needs the network only when a Dockerfile is used for deployment, which is a rather bad idea. Docker images should be used instead: they can be verified on a development/testing machine before deployment, and with that setup all the "bad pieces" of Docker live on the developer's notebook. In production, everything is read-only and shared among all containers.
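A minimal sketch of that workflow, with hypothetical image and file names:

    # On the developer/testing machine:
    docker build -t myapp:1.0 .
    docker run --rm myapp:1.0 ./run-tests    # verify before shipping
    docker save -o myapp-1.0.tar myapp:1.0

    # On the production host (no network, no Dockerfile needed):
    docker load -i myapp-1.0.tar
    docker run --read-only myapp:1.0

The production host never builds anything; it just loads a pre-verified artifact and runs it with a read-only filesystem.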