We’ve been onboarding more Technical Consultants lately, and we’ve been improving how we introduce them to our technology stack. We needed a learning environment for two technologies:

  • Ansible – used on more and more of our technical engagements. We use this to deploy the Atlassian applications and more.
  • Click2Clone – we deploy this on many customer engagements as both a migration utility and a way to restore data from production applications to lower tiers.

We wanted new people to be able to test both of these tools end-to-end in an environment with enough resources to provide “real world experience”. This seemed easy enough at first: I thought we’d just spin up instances in AWS and hand over the keys. But we needed to isolate this learning environment from everything else we were doing, and we also wanted to add some extra challenges to figure out along the way. With those requirements, I decided we needed something more repeatable that would also minimize the time the facilitator had to take away from customer work to stand it up.

Enter Ansible… again.

Using an Ansible playbook and CloudFormation, I was able to automate everything: creating the EC2 Key Pair, standing up a VPC with both public and private subnets, and then launching three EC2 instances (Jira Prod, Jira Dev, and a Click2Clone instance) and two PostgreSQL RDS instances. Let’s break this down:

Playbook Options

This section tells Ansible to execute the playbook locally (since we don’t yet have any remote instances).

---
- hosts: "localhost"
  connection: "local"
  gather_facts: false

Variables

vars:
  route53HostedZone: {{ Hosted zone where you can create DNS entries for each of the applications in our test environment }}
  base_ami_id: {{ Whichever flavor of AMI you want to spin up for your test environment }}
  rds_master_user_prod: {{ Admin username for your Prod RDS instance }}
  rds_master_password_prod: {{ Admin password for your Prod RDS instance (this is usually stored in an Ansible Vault, inline or in a file; see the vault sketch after this block) }}
  rds_master_user_dev: {{ Admin username for your Dev RDS instance }}
  rds_master_password_dev: {{ Admin password for your Dev RDS instance (this is usually stored in an Ansible Vault, inline or in a file) }}
  elb_ssl_cert_arn: {{ Wildcard certificate in AWS Certificate Manager (for the above hosted zone) to be applied to the Load Balancers }}
  aws_profile: {{ Named AWS profile (see https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html) with permissions to create the included infrastructure }}
  aws_region: {{ Whichever AWS Region you want to spin the infrastructure up in }}
  aws_vpc_stackName: {{ Unique prefix name for your CloudFormation Stacks }}
  aws_key_name: {{ Unique name for your EC2 Key Pair }}
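
Since the RDS passwords shouldn’t live in the playbook as plain text, here’s a minimal sketch of the inline Ansible Vault approach mentioned above (the example password and the playbook file name exercise.yml are placeholders, not from our actual setup). Encrypt the value once; this prompts for a vault password:

ansible-vault encrypt_string 'SuperSecretProdPassword' --name 'rds_master_password_prod'

Paste the command’s output into the vars section in place of the plain-text value:

rds_master_password_prod: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  <encrypted payload produced by the command above>

Then run the playbook with --ask-vault-pass (or --vault-password-file) so Ansible can decrypt it:

ansible-playbook exercise.yml --ask-vault-pass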

Tasks

This section is the meat and potatoes of this automation. It does the following:

  • Checks whether the private key for the EC2 Key Pair already exists locally
  • Creates the Key Pair if it doesn’t, saving the .pem file with 0600 permissions
  • Stands up the VPC stack via CloudFormation
  • Stands up the EC2 and RDS stack via CloudFormation, feeding it the Outputs from the VPC stack

tasks:
  - name: Check if AWS Key Pair Private Key exists
    stat:
      path: "{{ aws_key_name }}.pem"
    register: aws_key_exists
 
  - name: Create AWS Key Pair
    block:
      - ec2_key:
          name: "{{ aws_key_name }}"
          profile: "{{ aws_profile }}"
          region: "{{ aws_region }}"
          key_material: "{{ aws_public_key | default(omit) }}"
          wait: true
        register: aws_key_pair
        failed_when: false
 
      - copy:
          dest: "{{ aws_key_name }}.pem"
          content: "{{ aws_key_pair.key.private_key }}"
 
      - file:
          path: "{{ aws_key_name }}.pem"
          mode: "0600"
    when: not aws_key_exists.stat.exists
 
  - name: Create VPC for Exercise via CloudFormation
    cloudformation:
      profile: "{{ aws_profile }}"
      stack_name: "{{ aws_vpc_stackName }}-vpc"
      region: "{{ aws_region }}"
      state: "present"
      template: "CloudFormation/VPC_With_Managed_NAT_And_Four_Subnets.yml"
      template_parameters:
        KeyName: "{{ aws_key_name }}"
        SshLocation: "0.0.0.0/0"
        BaseImageId: "{{ base_ami_id }}"
        Route53HostedZone: "{{ route53HostedZone }}."
        Route53BastionSubdomain: "{{ aws_vpc_stackName }}-bastion"
      tags:
        App: "click2clone-exercise"
    register: vpc_stack
 
  - name: Create Infrastructure for Exercise via CloudFormation
    cloudformation:
      profile: "{{ aws_profile }}"
      stack_name: "{{ aws_vpc_stackName }}-instances"
      region: "{{ aws_region }}"
      state: "present"
      template: "CloudFormation/Exercise.yml"
      template_parameters:
        KeyName: "{{ aws_key_name }}"
        BaseImageId: "{{ base_ami_id }}"
        VpcId: "{{ vpc_stack.stack_outputs.AtlassianVpcId }}"
        PrivateSubnet1: "{{ vpc_stack.stack_outputs.AtlassianPrivateSubnet1 }}"
        PrivateSubnet2: "{{ vpc_stack.stack_outputs.AtlassianPrivateSubnet2 }}"
        PublicSubnet1: "{{ vpc_stack.stack_outputs.AtlassianPublicSubnet1 }}"
        PublicSubnet2: "{{ vpc_stack.stack_outputs.AtlassianPublicSubnet2 }}"
        BastionSecurityGroup: "{{ vpc_stack.stack_outputs.AtlassianSshBastionSecurityGroup }}"
        RdsMasterUserProd: "{{ rds_master_user_prod }}"
        RdsMasterPasswordProd: "{{ rds_master_password_prod }}"
        RdsMasterUserDev: "{{ rds_master_user_dev }}"
        RdsMasterPasswordDev: "{{ rds_master_password_dev }}"
        Route53HostedZone: "{{ route53HostedZone }}."
        ElbSslCertArn: "{{ elb_ssl_cert_arn }}"
      tags:
        App: "click2clone-exercise"
    register: ec2_stack
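
The two CloudFormation templates referenced above aren’t included here, but the wiring between the stacks depends on the VPC template publishing its resources as stack Outputs under the exact names the second task consumes. A minimal sketch of what that Outputs section might look like (the logical resource IDs such as AtlassianVpc and PrivateSubnet1 are assumptions; only the Output names come from the playbook):

Outputs:
  AtlassianVpcId:
    Description: ID of the exercise VPC
    Value: !Ref AtlassianVpc                    # assumed logical ID of the VPC resource
  AtlassianPrivateSubnet1:
    Description: First private subnet
    Value: !Ref PrivateSubnet1                  # assumed logical ID
  AtlassianPrivateSubnet2:
    Description: Second private subnet
    Value: !Ref PrivateSubnet2                  # assumed logical ID
  AtlassianPublicSubnet1:
    Description: First public subnet
    Value: !Ref PublicSubnet1                   # assumed logical ID
  AtlassianPublicSubnet2:
    Description: Second public subnet
    Value: !Ref PublicSubnet2                   # assumed logical ID
  AtlassianSshBastionSecurityGroup:
    Description: Security group for the SSH bastion host
    Value: !Ref SshBastionSecurityGroup         # assumed logical ID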

After spinning up all the necessary infrastructure, I wondered what to do with all the details. Much of that information is exposed as Outputs from the CloudFormation templates, but how do I easily and securely get it to the new Technical Consultant?
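
Both registered results already hold those Outputs, so while working out the hand-off, one quick way to see everything in one place is a debug task at the end of the playbook (a sketch, not part of the playbook above):

  - name: Show all stack outputs in one place
    debug:
      msg: "{{ vpc_stack.stack_outputs | combine(ec2_stack.stack_outputs) }}"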

Check out Jaime’s blog post on how we got the infrastructure details along with the pem file into Confluence so that we could give the candidate a single link.