
I'm familiar enough with Docker to know of it as a combination of lxc, cgroups, and probably other things, so that I can have one machine and one kernel, yet multiple userspaces. These userspaces are not (as I understand it) securely isolated from each other, but isolated enough that if there existed some monstrosity of a complex piece of software, which required lots of dependencies and customization, it might make sense to put it in a chroot or a Docker container for the CoW benefits.
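To illustrate what I mean by one kernel, multiple userspaces (just a sketch; the image name is an arbitrary example):

  docker run --rm -it ubuntu:22.04 bash
  # inside the container:
  uname -r              # same kernel version as the host
  cat /etc/os-release   # Ubuntu userspace, regardless of the host distro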

But what I'm not following (and again, I don't get the point of Docker and I don't use it, so in trying to learn I'm assuming you must know more...) is how it assists in provisioning the VM as you say. Sure, it could _change_ the provisioning of the _host_ (I'm calling the inside of the Docker container the host in this context). But it's not like the binaries being executed in the container are the macOS operating system. They run a VM, and within that VM is the macOS operating system.

If I have macOS running in a VM on a Linux host, I still need to log in to that macOS guest to configure networking, execute apps, and install software... So how did adding Docker to the picture make it easier?

Hence my confusion.



You're able to create files in the Dockerfile, which could be used for configuration, plus handle the networking side. AFAIK macOS has a text mode as well that should work similarly to other Unixes, though I'm not sure about that; if so, you should be able to execute commands just like with a Linux VM. Yes, there'd need to be a bridge between the macOS VM and the Linux Docker container.
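For example, something along these lines could bake configuration into the image and publish the VM's bridged ports (purely a sketch; the base image name, file paths, and port choices are placeholders, not a real project):

  # Dockerfile (hypothetical base image and paths)
  FROM some-macos-vm-image
  # bake configuration files into the image at build time
  COPY vm-config.plist /config/vm-config.plist
  COPY provision.sh /provision.sh
  # ports the VM's VNC and SSH would be bridged to inside the container
  EXPOSE 5900 22

Then running it with the ports published:

  docker run -p 5900:5900 -p 2222:22 some-macos-vm-image

So at least part of what you'd otherwise do by hand inside the guest can be scripted at build time.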



