Correct. It's like developing to a common deployment target: you provision a development VM with the same toolchain you use to provision production VMs, isolating the VM from the system state of your development machine. I cannot imagine developing any other way.
I did this for a while, but it falls apart without constant care and feeding. I had a Vagrant box that could be fully provisioned with Ansible, and checked everything into the repo. I paid for Parallels because VirtualBox is too slow to do any real work (unit tests take 2x or 3x longer to run, for example).
I couldn't convince anyone else to use this stuff. They were happy to just run some homebrew commands every now and then.
I was away from the project for about six months. When I got back I thought "great, I'll just `vagrant up` and be back in business."
Parallels had upgraded itself to some version that wasn't compatible with Vagrant. I was eventually able to find an old Parallels installer and downgrade.
My laptop had a newer version of Ansible, and it couldn't run the old Ansible scripts. There was no easy fix for something that should be easy: Ansible could no longer create Postgres users when running through a non-root account (with Vagrant you log in as the `vagrant` user, and that user has sudo).
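For what it's worth, the fix in newer Ansible is usually to escalate explicitly per task instead of relying on the connecting user's privileges. A sketch of the kind of task that broke (user and variable names are just illustrative):

```yaml
# Create a Postgres user; newer Ansible wants explicit privilege
# escalation to the postgres account rather than relying on the
# connecting user's sudo rights.
- name: Create application database user
  become: yes
  become_user: postgres
  postgresql_user:
    name: myapp                      # illustrative name
    password: "{{ db_password }}"    # illustrative variable
```

That still means touching every playbook that managed Postgres, which is exactly the kind of churn I didn't want to come back to.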
I deleted all that stuff, and now I do all my development locally. I do run my DBs in Docker containers, but that's it. Now it is super easy to add any version of any database to any project: just a few lines in docker-compose.yml. But my docker-compose.yml was written for an older version of docker-compose. I tried to upgrade it to the latest syntax and nothing worked. I reverted that and things are still running. It is only a matter of time before some Docker update renders my DB and Redis useless.
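For reference, the whole setup is roughly this (service names and image versions are just examples, not my actual file):

```yaml
# docker-compose.yml -- old-style v2 syntax; pinning image tags
# at least keeps a compose upgrade from silently pulling a
# different database version.
version: "2"
services:
  db:
    image: postgres:9.6    # example version; pin whatever you need
    ports:
      - "5432:5432"
  redis:
    image: redis:3.2       # example version
    ports:
      - "6379:6379"
```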
At that point I'll just `brew install redis` and `brew install postgres` and be done with it. Everything will run, and run at native speeds. Yay!
I think this is overstated. This only falls apart if you cannot get buy-in from your development team, and are not applying proper version management to your tool chain.
First, we use Ansible, not just for development, but for deployment. Because of this, if the time comes to upgrade Ansible, everybody upgrades, and any incompatible playbooks are immediately addressed.
Second, Parallels is a bad choice. They do not have a sufficient enterprise focus, and their product is updated too often. It's great for desktop virtualization, but not for much else.
We use VMware Fusion, have no performance issues, and have never experienced hypervisor bit rot. On the one occasion when we upgraded VMware, we also upgraded the corresponding Vagrant plugin, and everything worked fine.
I guess it depends on your use case but just about everyone I know uses Vagrant. It's true that you should never upgrade VirtualBox or Vagrant anywhere near a deadline, but I rarely have problems. I get similar performance from both VirtualBox and Parallels, both on network shares and execution. I'm on a 2014 MacBook Pro, Ubuntu guests, mostly doing PHP and Node.
That's interesting that you get the same performance on VBox and Parallels. I tested VBox, VMware, and Parallels on a MacBook Pro (2011 or 2012). VBox was an order of magnitude slower running our unit tests (and Rails tests are slow enough as it is). Parallels was the fastest, running the tests at close to native speed. VMware was okay, but not great.
I was using NFS - the default VirtualBox shared folders are so slow as to be useless (which I guess is what you were going to say). But on NFS (which is what Vagrant recommends) I had no problems.
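For anyone curious, switching to NFS is basically one line in the Vagrantfile, though with VirtualBox it also needs a private network (box name and paths here are just examples):

```ruby
# Vagrantfile -- use NFS instead of the default VirtualBox
# shared folders; NFS requires a private network on VirtualBox.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"                   # example box
  config.vm.network "private_network", type: "dhcp"
  config.vm.synced_folder ".", "/vagrant", type: "nfs"
end
```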