In previous blogs I’ve mentioned how much we love Ansible here at Isos. One of the ways we develop and use our internal roles is via a Vagrant Multi-Machine setup. This multi-machine setup allows us to spin up multiple Atlassian applications locally, bound only by the resources on our MacBook Pros.

The need to run more than one application at a time was the primary driver behind developing this solution. One of the first problems we ran into was that the host’s HTTP and HTTPS ports (80 and 443) can only be forwarded to a single VM. The multi-machine setup has its own private network that Vagrant manages, so we put a small NGINX proxy in front of the applications (normally NGINX lives on each app VM) to handle all incoming HTTP/HTTPS traffic and route it to the right application by hostname.
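
To make that routing concrete, here is a minimal sketch of the kind of hostname-to-upstream map such a proxy role could render into NGINX server blocks. The variable name and layout are illustrative, not our actual role interface; the IPs and ports match the Vagrantfile shown later.

# Illustrative only – not the actual Isos proxy role interface.
vagrant_proxy_vhosts:
  issues.isos.local: "192.168.33.10:8080"   # Jira VM
  wiki.isos.local: "192.168.33.11:8090"     # Confluence VM
  git.isos.local: "192.168.33.12:7990"      # Bitbucket VM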

Vagrantfile Breakdown:

Here are some important parts of the Vagrantfile:

config.vagrant.plugins = ["vagrant-hostmanager"]
 
config.hostmanager.enabled = true
config.hostmanager.manage_host = true
config.hostmanager.manage_guest = true
config.hostmanager.include_offline = true

Address resolution is managed via the hostmanager plugin for Vagrant. This plugin handles both the macOS host entries and the entries on each individual VM. This section sets the options for the plugin. We want the hosts files managed on both the host and the guests so that we can access the applications in a browser using the Atlassian base URL, as well as set up Application Links between the applications.

config.vm.define "vagrant-proxy", primary: true do |proxy|
  proxy.vm.box = "centos/7"
  # proxy.vm.box = "ubuntu/xenial64"
  proxy.vm.hostname = "vagrant-proxy"
  proxy.vm.define "vagrant-proxy-machine"
  proxy.vm.network :private_network, ip: "192.168.33.9"
  proxy.vm.network :forwarded_port, guest: 80, host: 80
  proxy.vm.network :forwarded_port, guest: 443, host: 443
  proxy.vm.network :forwarded_port, guest: 22, host: 10022, id: "ssh"
  proxy.hostmanager.aliases = %w(issues.isos.local wiki.isos.local git.isos.local fecru.isos.local crowd.isos.local c2c.isos.local)
  proxy.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", "512"]
  end
  proxy.vm.synced_folder '.', '/vagrant', disabled: true # The proxy doesn't need any of the artifacts in our Vagrantfile directory, so we disable syncing to speed up provisioning.
  proxy.vm.provision "ansible" do |ansible|
    ansible.playbook = "vagrant-proxy.yml"
    # ansible.verbose = "vvvv"
    ansible.galaxy_role_file = "requirements.yml"
    ansible.extra_vars = "vagrant.yml"
    ansible.vault_password_file = ".vault_pass"
    # ansible.tags = "debug"
    ansible.groups = {
      "vagrant-proxy" => ["vagrant-proxy-machine"],
    }
  end
end

In this section we define the vagrant-proxy machine that allows all of the applications to be accessed via HTTP/HTTPS. A few parts of this block must be set uniquely for every VM.

proxy.vm.network :private_network, ip: "192.168.33.9" – Each VM needs a unique IP address that the NGINX proxy can reach. We hardcode these so that the Ansible role that sets up the NGINX proxy can use the same hardcoded values. On our roadmap is handing off the proxy configuration file creation to each individual application box, and then using Ansible host facts to look up these IPs.
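
As a rough sketch of that roadmap item, the proxy configuration could pull each application’s address from gathered facts instead of a hardcoded value. This assumes the private network comes up as eth1 on the centos/7 box, and reuses the illustrative variable layout from earlier:

# Illustrative only – resolve the Jira VM's private-network IP from
# Ansible facts rather than hardcoding it.
vagrant_proxy_vhosts:
  issues.isos.local: "{{ hostvars['jira-machine']['ansible_eth1']['ipv4']['address'] }}:8080"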

proxy.hostmanager.aliases = %w(issues.isos.local wiki.isos.local git.isos.local fecru.isos.local crowd.isos.local c2c.isos.local) – This is where we set the local hostnames the applications are reached at, both from the host system and from within each VM. The aliases feature of hostmanager turns each of these into a hosts entry pointing at the proxy.
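
With the machines below defined, hostmanager writes entries roughly like these to /etc/hosts on the Mac and on every guest, so every application hostname resolves to the proxy VM:

192.168.33.9   vagrant-proxy issues.isos.local wiki.isos.local git.isos.local fecru.isos.local crowd.isos.local c2c.isos.local
192.168.33.10  jira
192.168.33.11  confluence
192.168.33.12  bitbucket git-ssh.isos.local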

Within the Ansible block, there are some other interesting bits:

ansible.playbook = "vagrant-proxy.yml" – We use a unique playbook for each VM. This allows us to put application-specific tasks in the playbook to be executed along with the roles.
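
For example, a minimal per-VM playbook might look like this. The role and task are illustrative stand-ins, not our actual internal roles:

# vagrant-proxy.yml (illustrative sketch)
- hosts: vagrant-proxy
  become: true
  roles:
    - isos.nginx-proxy   # hypothetical internal role name
  tasks:
    - name: Open HTTP/HTTPS in the local firewall (example VM-specific task)
      firewalld:
        service: "{{ item }}"
        state: enabled
        permanent: true
        immediate: true
      loop:
        - http
        - https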

ansible.galaxy_role_file = "requirements.yml" – We use a global requirements file to download the necessary roles. You could change this to application-specific requirements based on your role structure. We also use this requirements file to bring in different branches or repositories for roles using version:.
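
Here is an illustrative requirements.yml; the repository URLs and role names are placeholders, not our actual repositories:

# Pin one role to a feature branch while the rest track master.
- src: git+https://git.example.com/ansible/isos.jira.git
  name: isos.jira
  version: feature/some-new-feature   # a branch, tag, or commit
- src: git+https://git.example.com/ansible/isos.nginx-proxy.git
  name: isos.nginx-proxy
  version: master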

ansible.extra_vars = "vagrant.yml" – This is a really important piece of the solution for us. We develop our roles with defaults that allow each role to be deployed with no additional configuration. When we’re using this local development tooling to test new features, we use the vagrant.yml file to override variables for our test environment. Extra vars sit at the top of Ansible’s variable precedence, so they reliably win over the role defaults.
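
An illustrative vagrant.yml; the variable names stand in for whatever role defaults need overriding locally:

# Local overrides for the test environment (variable names are
# placeholders for our actual role defaults).
jira_base_url: "http://issues.isos.local"
jira_jvm_max_memory: "2048m"
confluence_base_url: "http://wiki.isos.local"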

Full Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.require_version ">= 2.2.0"
 
# Temporary workaround to Python bug in macOS High Sierra which can break Ansible
# https://github.com/ansible/ansible/issues/34056#issuecomment-352862252
# This is an ugly hack tightly bound to the internals of Vagrant.Util.Subprocess, specifically
# the jailbreak method, self-described as "quite possibly, the saddest function in all of Vagrant."
# That in turn makes this assignment the saddest line in all of Vagrantfiles.
ENV["VAGRANT_OLD_ENV_OBJC_DISABLE_INITIALIZE_FORK_SAFETY"] = "YES"
 
Vagrant.configure("2") do |config|
 
  config.vagrant.plugins = ["vagrant-hostmanager"]
 
  config.hostmanager.enabled = true
  config.hostmanager.manage_host = true
  config.hostmanager.manage_guest = true
  config.hostmanager.include_offline = true
 
  config.vm.define "vagrant-proxy", primary: true do |proxy|
    proxy.vm.box = "centos/7"
    # proxy.vm.box = "ubuntu/xenial64"
    proxy.vm.hostname = "vagrant-proxy"
    proxy.vm.define "vagrant-proxy-machine"
    proxy.vm.network :private_network, ip: "192.168.33.9"
    proxy.vm.network :forwarded_port, guest: 80, host: 80
    proxy.vm.network :forwarded_port, guest: 443, host: 443
    proxy.vm.network :forwarded_port, guest: 22, host: 10022, id: "ssh"
    proxy.hostmanager.aliases = %w(issues.isos.local wiki.isos.local git.isos.local fecru.isos.local crowd.isos.local c2c.isos.local)
    proxy.vm.provider :virtualbox do |vb|
      vb.customize ["modifyvm", :id, "--memory", "512"]
    end
    proxy.vm.synced_folder '.', '/vagrant', disabled: true
    proxy.vm.provision "ansible" do |ansible|
      ansible.playbook = "vagrant-proxy.yml"
      # ansible.verbose = "vvvv"
      ansible.galaxy_role_file = "requirements.yml"
      ansible.extra_vars = "vagrant.yml"
      ansible.vault_password_file = ".vault_pass"
      # ansible.tags = "debug"
      ansible.groups = {
        "vagrant-proxy" => ["vagrant-proxy-machine"],
      }
    end
  end
 
  config.vm.define "jira", autostart: false do |jira|
    jira.vm.box = "centos/7"
    # jira.vm.box = "ubuntu/xenial64"
    jira.vm.hostname = "jira"
    jira.vm.define "jira-machine"
    jira.vm.network :private_network, ip: "192.168.33.10"
    jira.vm.network :forwarded_port, guest: 8080, host: 8080
    jira.vm.network :forwarded_port, guest: 9080, host: 9080
    jira.vm.network :forwarded_port, guest: 22, host: 10122, id: "ssh"
    jira.vm.provider :virtualbox do |vb|
      vb.customize ["modifyvm", :id, "--memory", "4096"]
    end
    jira.vm.provision "ansible" do |ansible|
      ansible.playbook = "jira.yml"
      # ansible.verbose = "vvvv"
      ansible.galaxy_role_file = "requirements.yml"
      ansible.extra_vars = "vagrant.yml"
      ansible.vault_password_file = ".vault_pass"
      # ansible.skip_tags = "config"
      # ansible.tags = "deploy"
      ansible.groups = {
        "jira" => ["jira-machine"]
      }
    end
  end
 
  config.vm.define "confluence", autostart: false do |confluence|
    confluence.vm.box = "centos/7"
    # confluence.vm.box = "ubuntu/xenial64"
    confluence.vm.hostname = "confluence"
    confluence.vm.define "confluence-machine"
    confluence.vm.network :private_network, ip: "192.168.33.11"
    confluence.vm.network :forwarded_port, guest: 8090, host: 8090
    confluence.vm.network :forwarded_port, guest: 9090, host: 9090
    confluence.vm.network :forwarded_port, guest: 22, host: 10222, id: "ssh"
    confluence.vm.provider :virtualbox do |vb|
      vb.customize ["modifyvm", :id, "--memory", "4096"]
    end
    confluence.vm.provision "ansible" do |ansible|
      ansible.playbook = "confluence.yml"
      # ansible.verbose = "vvvv"
      ansible.galaxy_role_file = "requirements.yml"
      ansible.extra_vars = "vagrant.yml"
      ansible.vault_password_file = ".vault_pass"
      # ansible.skip_tags = "config"
      # ansible.tags = "debug"
      ansible.groups = {
        "confluence" => ["confluence-machine"]
      }
    end
  end
 
  config.vm.define "bitbucket", autostart: false do |bitbucket|
    bitbucket.vm.box = "centos/7"
    # bitbucket.vm.box = "ubuntu/xenial64"
    bitbucket.vm.hostname = "bitbucket"
    bitbucket.vm.define "bitbucket-machine"
    bitbucket.vm.network :private_network, ip: "192.168.33.12"
    bitbucket.vm.network :forwarded_port, guest: 7990, host: 7990
    bitbucket.vm.network :forwarded_port, guest: 7999, host: 7999
    bitbucket.vm.network :forwarded_port, guest: 22, host: 10322, id: "ssh"
    bitbucket.hostmanager.aliases = %w(git-ssh.isos.local)
    bitbucket.vm.provider :virtualbox do |vb|
      vb.customize ["modifyvm", :id, "--memory", "4096"]
    end
    bitbucket.vm.provision "ansible" do |ansible|
      ansible.playbook = "bitbucket.yml"
      # ansible.verbose = "vvvv"
      ansible.galaxy_role_file = "requirements.yml"
      ansible.extra_vars = "vagrant.yml"
      ansible.vault_password_file = ".vault_pass"
      # ansible.skip_tags = "config"
      # ansible.tags = "debug"
      ansible.groups = {
        "bitbucket" => ["bitbucket-machine"]
      }
    end
  end
 
end

You’ll notice common commented-out lines throughout the Vagrantfile. We write our roles to be deployed on a variety of operating systems.

# bitbucket.vm.box = "ubuntu/xenial64" – We can comment out the CentOS Vagrant base box and use Ubuntu for our local environment instead.

# ansible.verbose = "vvvv" – When things are going wrong we want more output from Ansible, but we don’t want it all the time.

# ansible.skip_tags = "config" and # ansible.tags = "debug" – These do opposite things, but both limit the Ansible run based on tags: skip_tags excludes any task tagged config, while tags runs only the tasks tagged debug. We use these primarily for debugging role execution.
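
For reference, this is how those tags typically appear on tasks inside a role (the tasks themselves are illustrative):

- name: Template the application configuration
  template:
    src: server.xml.j2
    dest: /opt/atlassian/jira/conf/server.xml
  tags:
    - config

- name: Show the resolved base URL while debugging
  debug:
    var: jira_base_url
  tags:
    - debug

With ansible.skip_tags = "config" the first task is skipped; with ansible.tags = "debug" only the second one runs.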