The local environment specifically isn't fully containerized in my project. The DB and similar things (Elasticsearch, message queue) run in containers locally, but the code itself does not. I worked on a project before where I had to have the code containerized as well, and it was a slow mess. I'd rather spend a couple more hours setting up the local dev environment for every new hire than deal with code in Docker locally.
In production we have it done the other way - PostgreSQL and Elasticsearch are run directly, but code is in containers.
To be honest, I have a fairly similar situation; I just use a different code container for local than for production. In production we run some things directly and package the code, whereas locally we package the services and keep the code semi-separate.
In the production environment, I want the code image to be set in stone, so that a deploy or rollback goes to the exact git commit that I expect. The CI script for deployment is just a `docker build` command whose Dockerfile clones a specific commit hash and runs dependency installation (yarn install, etc.); the image version is then set in the production env variables. The code ends up encapsulated in one image, which is used as a volume for the other containers, while the runtimes each live in their own container, connected by docker-compose.
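A minimal sketch of what that kind of build could look like; the repo URL, base image, and arg name are placeholders, not the poster's actual setup:

```dockerfile
# Hypothetical example: pin the image to one git commit via a build arg,
# e.g. docker build --build-arg GIT_COMMIT=<sha> -t app-code:<sha> .
FROM node:18-alpine

# git is only needed to fetch the code at build time
RUN apk add --no-cache git

ARG GIT_COMMIT
RUN git clone https://git.example.com/org/app.git /app \
 && cd /app \
 && git checkout "$GIT_COMMIT" \
 && yarn install --frozen-lockfile

# Declare the code directory as a volume so other containers can mount it
# (classic data-container pattern)
VOLUME /app
```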
For local, it's a much heavier code image that I've prebuilt, containing our current version of every tool we use, so that the host machine needs nothing but Docker installed to be able to do anything. The services that actually display stuff on the screen (Node.js) run as their own containers with their own processes, but you can hop into your code container (used as a volume for the services) and try out command-line Node stuff there, without fear of killing the processes that serve your local environment.
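With made-up service names (`code` for the tool image, `web`/`worker` for the Node services), hopping in might look roughly like:

```sh
# Start the local stack; web/worker keep serving the local site
docker-compose up -d

# Open a throwaway shell in the code/tools container; it shares the same
# code volume as the running services, so nothing you try here kills them
docker-compose run --rm code sh
```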
It took a long time to reach this point, lots of experimentation, but it's now pretty lightweight and pretty useful too.
Hmm, not sure I understand why you put code in one image and then use it as a volume for other containers. Why not run directly from the volume with the code?
Composability whilst maintaining a monolithic codebase. The code in its single image is used in 3 different environments, without including any runtimes that those environments might not need, which keeps the image small. At the same time, all the code can be kept in a single git repository.
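Not the poster's actual files, but the general shape of the pattern as I read it, in compose-file terms with made-up names (this assumes the `app-code` image declares `VOLUME /app`, as in the Dockerfile sketch above):

```yaml
# Compose file format 2.x still supports volumes_from; on v3 you'd populate
# a named volume from the code image instead.
version: "2.4"
services:
  code:
    image: registry.example.com/app-code:${CODE_VERSION}
    command: "true"           # exits immediately; only exists to carry /app

  web:
    image: node:18-alpine     # runtime only, no code baked in
    volumes_from:
      - code                  # sees the exact code from the pinned image
    working_dir: /app
    command: node server.js

  worker:
    image: node:18-alpine
    volumes_from:
      - code
    working_dir: /app
    command: node worker.js
```

The upside over a plain host-mounted directory is that the code image is immutable and versioned, so a rollback is just changing `CODE_VERSION`, and none of the runtime containers have to bundle tools the others need.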
I guess thousands of developers would pay for a fast Docker implementation on Mac. File access is so slow if you want to mount your source code into the container.
There are solutions like docker-sync, but they have a kind of random delay: sometimes the sync happens fast, sometimes it takes a few seconds.
I also do this. We have the DB and another service running via docker-compose, but our actual Webpack TypeScript app runs locally. We're on OS X, and due to the Docker file system slowness, the dev-test-run-deploy cycle is far too slow. It's much better running it outside of Docker.
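Not part of the setup described above, but for anyone hitting the same wall: on Docker Desktop for Mac the bind-mount consistency flags plus a named volume for `node_modules` were common partial mitigations (as far as I know, newer releases accept but ignore the flags and rely on VirtioFS instead). Paths and the dev command here are placeholders:

```yaml
version: "3.8"
services:
  web:
    image: node:18-alpine
    working_dir: /app
    command: yarn dev                    # placeholder dev command
    volumes:
      - ./:/app:cached                   # host view is authoritative; container reads may lag
      - node_modules:/app/node_modules   # keep deps off the slow bind mount entirely
volumes:
  node_modules:
```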