I used to do all my development in VMs but found they were too heavyweight, too slow and too difficult to keep "clean". Docker doesn't give you the same isolation at the resource level, which is a major benefit of VMs, but I also found this was more of a hindrance than a help. For me, sharing the artifacts of development or the dependencies (i.e. building Docker images or using containers for parts of the system not under development) is where the value is, vs a completely isolated, reproducible software/hardware environment. A baseline VM is still great for reproducing production issues, and avoids the "works on my machine" badge.
I followed the given instructions on macOS 10.15.7 after brew installs of vagrant and virtualbox, but running vagrant up resulted in
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Remote connection disconnect. Retrying...
...
Timed out while waiting for the machine to boot. This means that Vagrant was unable to communicate with the guest machine within the configured ("config.vm.boot_timeout" value) time period.
I cannot find any config.vm.boot_timeout in the Vagrantfile. Running vagrant up again prints
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'debian/bookworm64' version '12.20240212.1' is up to date...
==> default: Machine 'default' has a post `vagrant up` message. This is a message
==> default: from the creator of the Vagrantfile, and not from Vagrant itself:
==> default:
==> default: Vanilla Debian box. See https://app.vagrantup.com/debian for help and bug reports
almost immediately. Looks like timing out was not the real issue. I can run vagrant ssh now.
Realizing there's no cc and no git, I try
$ apt-get update
Reading package lists... Done
E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied)
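Edit: apt needs root for this, and the stock vagrant user should have passwordless sudo, so something like the following ought to do it (build-essential/git just being what I was missing for cc and git):

sudo apt-get update
sudo apt-get install -y build-essential git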
I was a heavy Vagrant user many years ago, but I thought the project was mostly dead due to changes in business model or focus from HashiCorp, IIRC, and of course, the popularity of Docker.
Also, isn’t Vagrant dependent on VirtualBox? Because the latter doesn’t run on ARM Macs.
I used Vagrant and VMs heavily for customer projects from 2013 to 2016.
An added benefit was mainly some sort of workspace isolation, since you could simply "vagrant in" to your different projects via WebStorm/PhpStorm in parallel.
I was given a 13" MacBook Pro with a then-massive 16 GB exactly for this reason: to handle VMs via Vagrant.
Before that, everything was kind of tricky and fragile, mostly relying on a LAMP stack without isolation.
What kind of sucks about ARM Macs is that the pretty darn good x86 emulation seems to be hidden behind a proprietary wall, at least as far as I know.
It sure would be nice if Apple provided a nice interface into their emulation for various projects like VirtualBox / Bochs / PCem / QEMU / DOSBox / VMware / WINE to plug into.
It also kind of sucks that the entire overall body of PC / DOS / Windows / x86 emulation and virtualization is locked in all these silos despite the open source nature. The problem probably is that there are so many gotchas to document and cross-annotate across the projects that it basically is impossible without some dedicated team of very talented technical documenters.
You can do it; Asahi Linux has access to a binary blob of Rosetta, or something like that, IIRC.
The thing is, I don't really care. I was worried I'd need to run Windows or x86 Linux. Turns out, I don't. I only had to run Windows once, for a few moments, in the three years since I got an Apple Silicon machine. Surprising, actually.
Added a footnote suggesting a workaround with Parallels for any confused ARM Mac users out there, searching desperately for this fabled VirtualBox program. Appreciate the catch.
I like Multipass because it does actually work pretty well, but its purpose is to run Ubuntu VMs:
> Ubuntu VMs on demand for any workstation
Which may be suitable for many Linux-based tutorials, but not necessarily all. For instance, we deploy software to RHEL at work (historical reasons more than anything else) and might want a RHEL-based tutorial for things unique to administering it versus administering Ubuntu or other distributions.
I'd say Nix is the more deterministic way to get all, and only, the things you absolutely need. Docker has the same issue as Vagrant: you'll get a different Debian depending on the day you init the VM, run apt, and so on.
I deem that you are correct! I have investigated NixOS in a Vagrant + VirtualBox combo very similar to what I did in this post, and yes, this is a much more deterministic solution to this idea.
If the people you are writing a tutorial for can be expected to be familiar with Nix, this is probably a much more future-proofed solution. That's still a small camp of people at the moment, but it does include the creator of Vagrant himself, Mitchell Hashimoto. I maintain that a specific Debian version is a better "lingua franca" for a general audience - that may change in coming years, of course.
I have begun writing a new tutorial called "NixOS: Three Big Ideas" to expose people like myself who haven't given Nix the time of day to explore its concepts in about 20 minutes or so.
It depends. Depends on a whole lot of things, and Nix isn't special in this:
Do you re-use an existing artefact or do you build a new one?
You can put Nix in Docker and in Vagrant and get the same result every time. You can also not use Nix, create a Docker image or a Vagrant box, and whenever you instantiate one of those you get the same result every time. You can also involve Nix and not pin to a specific release, and now you get a different result every time. That would be the same as creating a Dockerfile with a mutable tag or a Vagrantfile with a dynamic setup.
Usually you don't really want "the more deterministic way to get all, and only, the things you absolutely need.". That is something that is probably only really useful in CI when you need reproducible builds. For everything else you probably want to be tracking a stable release for the version you target. Especially since most people aren't working to get a deterministic result, but a 'close enough' result to get the job done.
In Nix you can pin your environment down to a commit in nixpkgs, from which, whenever you build your system (i.e., artefact), the result will be the same. How do you achieve this in Docker or Vagrant?
> Usually you don't really want "the more deterministic way to get all, and only, the things you absolutely need."
IMO, you do, as otherwise you'll need to fix additional issues every time your CI runs, no?
In Docker and Vagrant you do it the same way: the hash of the specific artefact. That can be a commit ID, but you can also do this without Git and the like and use a content hash. Those are guaranteed to always refer to the same thing.
>> Usually you don't really want "the more deterministic way to get all, and only, the things you absolutely need."
>IMO, you do, as otherwise you'll need to fix additional issues every time your CI runs, no?
That is why I wrote about the specific exception for CI, but since we're talking about humans experimenting in a box, we're not talking about CI.
> In docker and vagrant you do it the same way, the hash of the specific artefact.
No, in Nix you pin the commit hash of nixpkgs, which gives you the exact same package set every time. But with a Dockerfile, even if you add it to your project's repo, any `apt-get install something` will still give you a different something depending on when you build the image. In Nix, you always get the same something.
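As a concrete sketch of what that pinning looks like (the commit hash is a placeholder; substitute whichever nixpkgs revision you want to pin to):

nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/<commit-sha>.tar.gz -p git
# everyone running this with the same <commit-sha> resolves git (and its whole
# dependency closure) from that one nixpkgs revision, so the result is identical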
> That is why I wrote about the specific exception for CI
I meant that you keep the dev environment the same as CI, because if you develop in a different environment than CI, then whenever CI runs you'll get new issues (because you weren't using the same environment).
I wasn't referring to Nix, how Nix uses hashing, what specific hash or source implementation it is using, or whether it might be using an artefact hash. I was referring to the referencing method: a hash. This is the same method in the three products mentioned (Nix, Docker, Vagrant): an immutable pointer to a specific resource.
As for CI and dev environments: not relevant here. I only mentioned it in passing because CI is a common use case for hardcoded immutable references. We're talking about someone wanting a box to experiment in, not CI. Not even software engineering, either.
SHA256, generally. So for example, you might use 5c15a6e5c0bf02e6c0eaa939cb543c41d7725453064c920b9a4faeea7c357506. (This is a digest hash you'd use in Docker, as an example)
You can consume that specific one, but also reproduce it locally if you want to. Depending on your intent, bandwidth and trust you can pick the method you desire.
I mean hashes of what, not which hash algorithm. If you build the same Dockerfile, then I don't think you'll get the same hash, so I'm not sure what the use of that hash would be. (Did you mean a hash of some other thing?)
The hash of the image. So you cannot get the same hash if you change something in the image. This is also why you have reproducible builds: the source should always result in the same hash, otherwise the build would not be reproducible.
So building the same Dockerfile twice gets you the same hash. This does of course not work if you change something each build, like pulling in some random dependencies that you didn't also pin to a hash, or if you don't use a fixed point in time for the filesystem. We also sign the images so you get some protection against (future) hash collisions. Works for about 900 different images in our internal registries so far. We also used nix, but only a handful of developers actually enjoyed it, so that project got killed off.
You can use any search engine and search for "Dockerfile reproducible build" if you want to learn more.
You get the same hash only if you build the image when none of your dependencies were updated in the meantime. If you `apt-get install something` in a month, the resulting image will most likely not have the same hash.
Docker itself doesn't really get you reproducible builds. I assume you haven't actually tried to achieve it if you still believe it. Docker is like any arbitrary linux machine you start installing things on: you need some other system to get reproducibility.
We have used reproducible builds for many years, and we haven't had issues with it. Of course you're going to get random results when you issue random commands (like apt-get install), which is why you don't do that if you want reproducible builds.
It's also not a normal workflow to build the same image many times, except for validation, and you do that with frozen sources, not arbitrary references. You build the image, sign it, distribute it. It doesn't get rebuilt on the destination.
We build on official releases and add our own artefacts. So we might pin to a Microsoft .NET hash, a JVM hash, a Node.js hash, and in our pipeline add a released application to that container. There are no packages installed, only files added, including an entry point. This is then signed and stored, along with the source commit and source image.
This way we can reproduce at will, but also make use of the very large ecosystem of suppliers, consultants and communities that already exist.
Edit: the other resources (source, source image) are also packaged as OCI images and signed and stored so you have everything available in the registry, even if the source repositories were to cease to exist. Because everything else we run interfaces with OCI registries by default we don't have to re-invent that wheel.
Reproducible build means that you get your source code from somewhere, it has dependencies listed, you install those, then you compile your artifact from your source, and the resulting binary or container is always the same. You can delete them, start from source in six months and it will still be the same.
If you want to do that, you'd need to keep your artifact around, and I guess you'd also need to back up the docker images you depend on.
>create a Docker image or a Vagrant box, and whenever you instantiate one of those
I'm pressing X to doubt.
This is just odd analysis. Nix is strictly superior to Docker, except the learning curve. You have to really work at it to make any modern Nix setup non-reproducible.
You can pass X as much as you like, but Nix, like anything else, can be configured with dynamic references. Nix is also not related to Docker (neither superior nor inferior), and the learning curve makes it unsuitable for everything where Docker and Vagrant are suitable, even if it would technically do the same thing.
Either way, you are missing the point: the problem here isn't having perfect reproduction. It's about having an environment you can learn in without having to worry about breaking things. That's why the author references Vagrant, since it's about as low a barrier to entry as disposable environments can be (and even that is still a barrier too high for some).
> but Nix, like anything else, can be configured with dynamic references.
Tell me you don't know what you're talking about, without telling me you don't know what you're talking about.
If you can do this, with flakes, or in pure-eval mode, it's a stop-everything bug in Nix.
Also "Nix isn't the box" again betrays how little you know about Nix. I can take a Nix program and package it into a VM or OCI with literally zero effort. Similarly I can transform any NixOS configuration into a VM for literally any platform I can image, similarly trivially.
Like, I literally can type a single command and boot my current machine configuration in a completely disposable VM. I can trivially build any rev of my machine over the past 3 years and get an *IDENTICAL* nearly bit-for-bit replica of that machine from any point in time.
You can do all of that with any programming language; that is not a special trait of Nix. Your lack of insight into what people actually want betrays your need to promote Nix. You can also use Nix to automatically make sure that you install the latest version of something, and that means that if you rebuild at different times you get different results. That is working as intended, but having looked around, this is because we use niv everywhere, since we really don't want to waste time on manual version management; that's what computers are for.
Edit: come to think of it, this is probably why we stopped using nix for new projects. Having to wrap nix a lot to make it work with reality makes it a bit pointless.
Anyway, Nix isn't the box, and it still will never be the box. It doesn't matter that it could be the box, because the people that want the box are not the people that can utilise nix. This Venn diagram just doesn't intersect.
As for your insistence that it's bit for bit guaranteed: do you really think that anyone in the world who is just looking for a box gives a crap? It doesn't matter how deterministic it is, or how nix does it better than anything else, because the entire factor of determinism doesn't matter in this case.
Sorry, but this is just nonsense. You have to intentionally use Nix in a way that no one does, in a way that is highly discouraged in every corner of the community, in order to get unreproducible behavior. Furthermore, that misuse does not apply to any way that Nix is actually shared amongst peers (a nix-shell or flake).
The only way this is true is if you're sending around a list of packages, telling users to install them manually, not pinning nixpkgs, not using a flake. Aka, a situation I've literally never seen or heard of. Ever. Worst case, a user comes and asks how to make `nix-env -i` portable and the community collectively gasps and discourages the behavior.
I'm not advocating anything. I acknowledged Nix has a learning curve. I have countless issues opened for UX nags. I'm combating outright FUD and ... I don't want to say... at this point.
>As for your insistence that it's bit for bit guaranteed: do you really think that anyone in the world who is just looking for a box gives a crap? It doesn't matter how deterministic it is, or how nix does it better than anything else, because the entire factor of determinism doesn't matter in this case.
Move those goalposts and ignore the other benefits that spoke to the desired use case, again, sure!
EDIT: Oh brother, a niv user. Never mind, ignore my post, you probably already know it and don't care. Not worth the time for either of us. Implying that using niv is harder, or somehow not superior to using Docker is really a hot take. I am sad I ever entered this thread.
EDIT2: I'll leave this here and stand by it 110% and I'll insist it proves my point:
>If you can do this, with flakes, or in pure-eval mode, it's a stop-everything bug in Nix.
I think it's you who is missing the point: the learning environment won't build reproducibly, as neither Docker nor Vagrant will give you reproducible environments, while Nix will. Sure, you can make your Nix environments dynamic too, but having determinism is its core feature. I don't think it is with Docker or Vagrant, as the environment you get will be isolated, but not deterministic. You can, of course, bolt some custom thing on top of those (or use Nix), but that is going to be a pain.
EDIT:
Both vagrant and docker are useful tools, it's just that for creating any dev environments nix will be better due to determinism.
The point is, and always was: someone wants a box to experiment in. Nix is not that box. It was never the box. It will never be the box. That is the point I am making and reflects the point the author was making.
And just in case there is a language barrier: box does not refer to a specific technical implementation of a box, it's just a term to denote a border between the user's system (which they don't want to break) and "something else".
Edit: and just in case the word 'someone' trips over a language barrier, I'm not referring to 'anyone' but to the persona (the 'someone') who might want to try something out because they saw something and thought it was cool. Not someone with package manager experience, not a software engineer, not a sysadmin.
We're also not 'building a learning environment'. Learning environment is a proxy for disposable environment, which as a relatively simple concept is already a step too far for the average "how do I become a cool hacker like on TV" case (which is where the unreasonable effectiveness from the title comes in), where someone might take their first steps and want to try something out without breaking their current environment.
Not in this case. In this case we want a general computer user to perform an action to have a 'computer in a box' that they can break with no consequences with a very low barrier to entry. And we want that action to be easy to share.
Nix also gives you (not me, just in case you are going to assume that) a headache, which is why it's not getting the adoption that everything else does get. Nix is the betamax of state managers.
Docker gives you the headache of not actually giving you reproducibility. If you haven't been bitten by it then I don't think you have tried.
For popularity, I wouldn't actually mind if Nix weren't all that popular, but it may just have the largest number of packages available for install of any package manager.
I haven't heard of anyone getting a Docker-related headache, because it delegates reproducibility to whatever tools you pick. We haven't been bitten by it, and you might think I have not tried, but you'd be wrong and making some odd assumptions.
Perhaps I can assume that you have only used nix personally or at a small company?
You can avoid that unpredictability if you don't use the latest tag, or, better, create images or start containers from SHA256 digests. Those are guaranteed to be immutable.
I.e. instead of:
docker run --rm debian:trixie
You'd do
docker run --rm debian:trixie@sha256:0ee9224bb31d3622e84d15260cf759bf37f08d652cc0411ca7b1f785f87ac19c
The only disadvantage of the digest approach is that you would need to manually resolve the digest that's correct for your processor arch. Using bare tags like "debian:trixie" can resolve to manifest lists (if so configured) that let Docker automatically find the right digest for your arch.
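If it helps, something like this should list the per-arch digests behind a tag (assuming a reasonably recent Docker CLI with buildx):

docker buildx imagetools inspect debian:trixie
# or: docker manifest inspect debian:trixie
# both print the manifest list, including one digest per architecture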
Also, at least for Debian there are official images available that use snapshot.debian.org as package repository from the get-go – unfortunately those images are not published on a daily basis yet.
> But if you read those links, you'll see that it's not quite there yet.
Anything you're referring to specifically? I think I made it pretty clear that Docker images based on package repo snapshots are not fully "there" yet.
> Docker in general can't solve reproducibility - it's the package manager within any container that does that.
No doubt, but right now the issue is that the package managers have done their part and the Docker images need to catch up.
Then you would create a Dockerfile that uses the digest SHA you want as a parent image, add your packages, build a new image from that Dockerfile with an updated tag, and push the image into the registry. A digest will be created for it once it's uploaded to the registry.
If you are not uploading images into registries (why?), you can use "docker save" to turn that image into a tarball and then checksum the tarball.
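A rough sketch of that flow (registry and image names are placeholders; the parent digest is the one from above):

cat > Dockerfile <<'EOF'
# parent pinned by digest, so the base layers never change underneath you
FROM debian:trixie@sha256:0ee9224bb31d3622e84d15260cf759bf37f08d652cc0411ca7b1f785f87ac19c
RUN apt-get update && apt-get install -y --no-install-recommends curl
EOF
docker build -t registry.example.com/team/base:1.0 .
docker push registry.example.com/team/base:1.0    # the registry records a digest for the pushed image
# or, without a registry: export the image and checksum the tarball
docker save registry.example.com/team/base:1.0 -o base.tar && sha256sum base.tar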
Re Docker, chroot, etc: the boundary around a VM is both easier to understand and more hermetic than around a container.
The learner already has a pretty good idea of where a computer stops. They can pretty much transfer that knowledge to a VM and not have any problems for a long time. Whereas with Docker, what's a filesystem? What's a process namespace?
And, well, I wouldn't do anything in a container that I couldn't afford to leak out into my host system. In a chroot especially, the files are just sitting there, waiting for a newbie to touch them from the wrong context.
Exactly! The key thing to realize here is that tutorials have to be aimed at newbies. If they know containers, great. If they don't, and later want to learn, writing and running a Dockerfile based on instructions for the cool thing they successfully ran once saying "Here's our VM, here are the commands we run, and here's the program running in action" is a great little practice problem.
Docker is a lot better on Linux; everywhere else you must also spin up a VM of sorts. It makes a difference if you're running multiple instances, since you only pay for the virtualized kernel once.
Vagrant is also great for filing bug reports in software. It allows you to give the developers all the commands to reproduce the bug in a clean environment.
I swear that there should be a tag called #PeopleWhoDontKnowAboutNix
Every single developer on my team has exactly the same environment thanks to a single flake.nix file that I wrote. (And it wasn't that hard. Certainly easier than wasting time troubleshooting individual dev's configuration differences. And certainly less frustrating than Docker.)
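For anyone curious, a minimal sketch of what such a file can look like (the nixpkgs branch and packages here are just examples, not our actual setup, and it assumes flakes are enabled):

cat > flake.nix <<'NIX'
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
  outputs = { self, nixpkgs }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      # `nix develop` drops everyone into the same shell, pinned via flake.lock
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.git pkgs.nodejs ];
      };
    };
}
NIX
nix develop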
I like this process much better than using pre-compiled virtual machine images that, let's just say, certain state-controlled businesses are offering up as an easier way to install dependencies and/or toolchains needed for their "open source" hardware.
He doesn't want me to have to go figure out what Docker is, but now instead I have to figure out what Vagrant is or does, which he doesn't explain. I go to the Vagrant site and see:
"Single workflow to build and manage virtual machine environments"
If I was new enough to not know what Docker is, I am pretty sure I would be a little confused about that. I don't want a "workflow", do I? I thought I was trying to follow a tutorial. Do I need to "manage virtual machine environments"? Do I need to install both VirtualBox and this Vagrant tool?
I think this is a nice idea and useful, but I wouldn't couch it in terms of being some great technique for beginner tutorials.
The unreasonable obnoxiousness of the overuse of the "unreasonable effectiveness" meme: I know I shouldn't let this bother me, but the "unreasonable effectiveness" of math in the natural sciences was not merely an assertion that math is useful in science, but an observation that math is so useful that it must hint at some underlying metaphysical or philosophical truth about the very nature of reality. I don't think the usefulness of VMs in pedagogy is hinting at anything so profound. They're useful in a reasonable way.
You probably don't need Vagrant anymore, now that you can throw VMs into any OCI-compliant registry. Prefer the container way and want multi-arch support? You can then convert the container disks into VMs.
> Prefer the container way and want multi-arch support? You can then convert the container disks into vms.
The prior step is a lossy conversion, though; there are things arbitrary VMs can know about/control, that OCI container images cannot. Like UEFI, or host core affinity, or the layout of the guest’s physical memory, or host disk storage formatting, or nonstandard behavior for paravirtualized devices like NICs.
Ideally the conversion from VM to OCI image would have some sort of standard + portable schema for encoding this type of VM “pragma” info as latent metadata — which container runtimes would ignore, but which a back-conversion into a VM could pick up and translate into hypervisor-specific VM config.
I think, if that was done, then OCI images would indeed become the only portable format you need for any type of container or VM.
Vagrant/VirtualBox was a heavy heavy part of tech support workflows 5-6 years ago, since it gave you a way to effectively script consistent, quasi-reproducible scenarios _including reproductions of wacky customer network and storage topologies_.
That's not as important anymore, I think, since wacky network/storage topologies are now mostly wacky k8s topologies in stuff like helm charts. But I imagine that old use case isn't completely gone.
VM-centric development seems so wasteful to me, especially doing it on your hacker home lab. Say you have six cores, overprovision 3x, and that leaves you with 18 VMs each running One Thing in the VM ethos. Meanwhile, open a PC or a Mac and see how many services are running on the bare metal; it's probably over 100 and not even impacting you. The author wants you to run Parallels to run VMs on Macs… it's madness.
To be clear, I am advocating for using disposable, blank slate VMs in tutorials, so a student can walk through the exact steps they would need to get the thing built, installed, and running, with a high chance of success. Nothing more, nothing less.
"effectiveness of VMs" is probably not what really mattered, but rather "disposable environments".
A VM can be a type of re-creatable and disposable environment, if you also include an OS and configuration. But if all you need is a CLI context, the same could be said for chroot, LXC, Docker etc. A VM can be an easier mental model or box for people, but not all people are the same.
> “But Andrew, Docker is already a thing. It’s been a thing for some time, even.” Correct. But everyone knows what a virtual machine is. Not everyone understands Docker. Do you want to risk sending someone off on a wild goose chase learning what containerization is, when they could be learning your thing instead? Meet your student where they are, not where you wish they were.
Well, docker is basically the same thing without the excess resource usage (wastage in many cases) of a type 2 hypervisor.
Also, had a long-forgotten setup with vagrant that I later came back to and couldn't start the VM after a couple of upgrades to vagrant & its dependencies. So take that as you will.
In my experience Vagrant over-promises and under-delivers. Granted, this was a few years ago.
First, there's the whole "VM" part. You are constantly reminded of the border between your machine and the VM: having to explicitly start/stop the VM, SSH into it, set up port forwarding, figure out a core count & RAM size which is large enough to be useful yet small enough to work on everyone's machine, having to set up rsync wrappers because the NFS host mount is unusably slow...
Second, it doesn't really deal well with change. When you're working in a team, you want everyone to have an identical environment. Vagrant is great for setting it up, but it doesn't really have a solution to changing it. Want to add a package? Nuke the VMs, start from scratch, and hope nobody runs into install issues.
Docker, especially with docker-compose, provides pretty much the same but better. It was a no-no when Vagrant first started due to poor Windows support, but I understand that WSL has significantly changed that. You'd have to pay me to go back to Vagrant.
> having to explicitly start/stop the VM, SSH into it, set up port forwarding, figure out a core count & RAM size which is large enough to be useful yet small enough to work on everyone's machine, having to set up rsync wrappers because the NFS host mount is unusably slow...
All this is still kind of true for Docker on any other platform than Linux. There’s still a VM behind the scenes, and you still have to configure and manage it.
However, I personally do avoid that — when I use Docker “on” macOS, I skip the hassle of actually running the Docker daemon locally, by instead having a little headless Linux mini-PC that exposes the Docker control socket over Tailscale to me, and configuring my local Docker client to connect to it. This not only gets `docker run` streaming to a local PTY, but even streams input files for `docker build` to the Docker daemon, entirely transparently. It’s really nice, as I never have to worry about whether Docker Desktop is running, has enough memory, is taking up my local disk space, etc.
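For reference, the client-side setup is roughly this (the hostname and context name are made up; any daemon reachable over SSH or TCP works the same way):

docker context create minipc --docker "host=ssh://me@minipc.tailnet.example"
docker context use minipc
docker run --rm hello-world    # runs on the remote daemon, output streams to the local terminal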
Surely there’s a way to do something like this but for vagrant? A Vagrant “thin client” that runs all the VMs on an IaaS service or in-office hypervisor cluster?
> Vagrant is great for setting it up, but it doesn't really have a solution to changing it. Want to add a package? Nuke the VMs, start from scratch, and hope nobody runs into install issues.
Option 1 (centrally managed Vagrant VMs via MDM): Keep the VM’s rootfs volume separate from its /home + /var volume, and push updates to the rootfs volumes via MDM (ideally using a binary diff system like Courgette.)
Option 2: build a chroot under /opt with versions of everything you care about, using e.g. Nix. Then create an apt/RPM package that in turn uses chef/puppet/etc to update this chroot within the VMs. (This is basically what Gitlab does for their self-hosted installable.)
Still not as good as just pulling a new Docker image… but possible.
> First, there's the whole "VM" part. You are constantly reminded of the border between your machine and the VM: having to explicitly start/stop the VM, SSH into it, set up port forwarding, figure out a core count & RAM size which is large enough to be useful yet small enough to work on everyone's machine, having to set up rsync wrappers because the NFS host mount is unusably slow...
What are you talking about? I used to do all that you're mentioning in the Vagrantfile, via code. Granted, ruby code (which doesn't help), but still in code.
> Second, it doesn't really deal well with change. When you're working in a team, you want everyone to have an identical environment. Vagrant is great for setting it up, but it doesn't really have a solution to changing it. Want to add a package? Nuke the VMs, start from scratch, and hope nobody runs into install issues.
Uh, adding the Vagrantfile to the code repository, maybe? So that you can decide if a package goes into the VM via a code review? Maybe using the provisioning methods, so that changes can both be reviewed and git-tracked (something like the sketch below).
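A minimal sketch; the box name, resources and package are just examples:

cat > Vagrantfile <<'RUBY'
Vagrant.configure("2") do |config|
  config.vm.box = "debian/bookworm64"
  config.vm.network "forwarded_port", guest: 8080, host: 8080
  config.vm.provider "virtualbox" do |vb|
    vb.cpus   = 2
    vb.memory = 2048
  end
  # provisioning lives in the repo, so adding a package is a reviewable diff
  config.vm.provision "shell", inline: "apt-get update && apt-get install -y git"
end
RUBY
vagrant up           # first boot runs the provisioner
vagrant provision    # re-runs provisioners on the existing VM after a change, no nuking required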
Like, have you actually read Vagrant's documentation? Or are you one of those people who criticize stuff just because the getting-started one-page tutorial doesn't solve 100% of their use case?
I've used Vagrant in a team in the past, and it feels like either you have no idea what you're talking about or you have used it only briefly without actually reading any documentation.
> Docker, especially with docker-compose, provides pretty much the same but better.
There are some things docker cannot do, and some things vagrant cannot do.
They're completely different pieces of technology that happen to be able to do similar things (and happen to be used for similar purposes).