I’ve been working with Ansible for the last several days, testing my Ansible scripts on DigitalOcean. First, I’d create a fresh $5 droplet, run my script a couple of times against this server, and soon after the tests were finished I’d destroy the droplet. But it wouldn’t end there: I’d often want to make a quick change in the script and test it, but the environment was already different and I couldn’t be sure the script would behave the same way on a fresh one, so I’d go and create another droplet and repeat the process from the beginning. Though it’s not very difficult or time consuming, I can still optimize the process by saving the time I spend waiting for droplet provisioning and removal, and by saving money: each time I spin up a new droplet I pay for at least 1 hour of time, plus I often forget to remove the droplet immediately, which means I have to pay for that time too.

So, my only requirement was to be able to recreate a fresh Ubuntu image within a minute, and ideally the workflow should be scripted. I knew about VirtualBox and had used it before, but a GUI was not the best option, since I wanted to script the workflow. There is the VBoxManage utility, which could be used to automate things, but that would still be a lot of work compared to other options. I knew about Vagrant too; I’d used it extensively for a local LAMP stack and as a shared dev environment on some projects. So, after some quick research and testing a few hypotheses, I settled on Vagrant.

I don’t know how Vagrant works under the hood, so at first I was afraid that my Ansible script would change the base Ubuntu image locally, which would mean that a new virtual machine wouldn’t start from a fresh image, but instead in the same state the previous one was left in after my Ansible scripts were tested. One could still duplicate the base image for each fresh virtual machine, but that means more hassle with scripting. Anyway, it turned out that it’s pretty easy to provision fresh boxes with Vagrant, so I eventually decided to use it.

In order to set up a fresh Ubuntu box with Vagrant, I first need to initialize it:

vagrant init ubuntu/trusty64

This will create a Vagrantfile in the current directory, which contains a basic configuration for our box. In order to boot the newly created box, I need to run this:

vagrant up

This command will download the box (if we don’t have a copy locally), perform the necessary configuration and start the machine. After the machine has started, it can be accessed via SSH on port 2222 (you can change the default port in the Vagrantfile, as shown below):

vagrant ssh
# or
ssh vagrant@localhost -p 2222
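
Port 2222 is just Vagrant’s usual default for the forwarded SSH port. If it’s already taken on your machine, or you want to pin it explicitly, it can be overridden in the Vagrantfile; a minimal sketch (as far as I can tell, the id: "ssh" part tells Vagrant to replace its built-in SSH forward rather than add a second one):

config.vm.network "forwarded_port", guest: 22, host: 2222, id: "ssh"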

After the work with the machine is finished and I want to rebuild it from scratch, I just need to run these two commands:

vagrant destroy
vagrant up

OK, the virtual machine problem is almost solved. Initially, you can’t log in via SSH as root with a password (actually, you can log in as root, but you need to use the generated private key, which is not very easy to get right). When you run the vagrant ssh command, Vagrant connects to the box as the vagrant user, but in order to match the behavior of DigitalOcean droplets I need to be able to log in as root. Vagrant boxes usually have 2 default users, root and vagrant, both with vagrant as the password. You can log in as the vagrant user with that password, but you won’t be able to log in as root. This is expected behavior, and in order to change it we have to change the SSH daemon settings. We also need to make this change each time we provision a machine, so I’m going to write a simple script and put it inside the provision section of the Vagrantfile. Here’s the code:

config.vm.provision "shell", inline: <<-SHELL
	sudo sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
	sudo service ssh reload
SHELL

Basically, this script replaces a line in the SSH daemon configuration file, allowing the root user to log in with a password, and then reloads the SSH configuration to apply the change. After the box is up and running we can log in with root as the username and vagrant as the password. Now that we’ve replicated a DigitalOcean droplet, we can start testing Ansible scripts on these machines.
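
For context, here is roughly what my whole Vagrantfile boils down to once the generated comments are stripped and the provision block is added (a sketch; the exact boilerplate produced by vagrant init varies between Vagrant versions):

Vagrant.configure(2) do |config|
  # use the official Ubuntu 14.04 box
  config.vm.box = "ubuntu/trusty64"

  # allow root to log in over SSH with a password,
  # mimicking a fresh DigitalOcean droplet
  config.vm.provision "shell", inline: <<-SHELL
    sudo sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
    sudo service ssh reload
  SHELL
end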

There is still one problem, though: each time we ssh into a box, our client adds the host fingerprint to the ~/.ssh/known_hosts file. And each time we destroy and spin up a new box, the fingerprint changes, so the next time we try to log in, the connection fails with an error about a fingerprint mismatch. We are going to solve this problem by instructing Ansible to skip host key checking. We can do that by setting the following environment variable:

export ANSIBLE_HOST_KEY_CHECKING=false

But this option will be lost after we exit the current shell session. So, it would be more reliable to put it in a config file, which we are going to create in the directory where our Vagrantfile lies.

# this will create an Ansible config file which will
# instruct Ansible to ignore host key checking
# (printf is used instead of echo so that \n is interpreted as a newline)
printf "[defaults]\nhost_key_checking = False\n" > ansible.cfg

After the config file is created we can run a test command against our machine (it will prompt for a password, which is vagrant by default):

ansible all -i 'localhost,' -e 'ansible_port=2222' -u root -k -m ping

We are telling Ansible that it should:

  • run this command against all machines in the inventory, all
  • use an ad-hoc inline inventory, passed directly as a host list (because it is easier at this point), -i
  • use port 2222, passed as an extra variable, -e
  • log in as the root user, -u
  • ask for the SSH password, -k
  • invoke the ping module, -m

The ping module is a sort of hello world program which just tests whether we can connect to the machine. The command we’ve just run is pretty cumbersome; we can simplify it a bit by moving the options from the command line to config files. First, we are going to add our Vagrant machine to the global inventory file:

echo "localhost ansible_port=2222 ansible_user=root" >> /etc/ansible/hosts

Then we instruct Ansible to prompt for the password on each invocation:

echo "ask_pass = True" >> ansible.cfg

Finally, we can run a short version of the command and accomplish the same result:

ansible all -m ping

After performing our tests on the machine we can rebuild it with this one line command:

vagrant destroy -f && vagrant up

If you don’t want to clutter your terminal screen with Vagrant messages you can redirect the standard output and standard error:

vagrant destroy -f > /dev/null 2>&1 && vagrant up > /dev/null 2>&1

In order to shorten this command, we can move it to a file:

echo "vagrant destroy -f > /dev/null 2>&1 && vagrant up > /dev/null 2>&1\necho vm is ready to use" > vm.sh
sudo chmod u+x vm.sh # set executable permissions on the file
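
Equivalently, if the escaped one-liner looks too cryptic, the same script can be written with a heredoc:

cat > vm.sh <<'EOF'
#!/bin/sh
vagrant destroy -f > /dev/null 2>&1 && vagrant up > /dev/null 2>&1
echo vm is ready to use
EOF
chmod u+x vm.sh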

And, finally, we can rebuild our virtual machine from scratch with this simple command:

./vm.sh