Easy custom Elastic Bamboo agents with Packer


Bamboo provides the powerful ability to dynamically scale your build farm by launching swarms of build agents on Amazon’s infrastructure. These AWS images are fully customisable, but the process is a bit involved; this post introduces a simpler method using Packer and Ansible. Read on for the details (and a ready-to-use example repository)…

One of the most powerful features of Elastic Bamboo is the ability to provide custom AWS images for your agents to run on. This allows you to make additional software available to your build agents, with your build and deployment tools pre-installed. However, while the process is well documented, it is not something to be taken on lightly, especially if you’re not familiar with the arcane commands of the AWS suite. Not to mention that in these days of configuration management, hand-wrangling servers (and especially disposable VMs) is far from best practice. It’s much better to automate these tasks in a repeatable manner. One option is to encode the necessary commands into a script, but such scripts rapidly become unmanageable; this is why tools such as Puppet and Ansible came about in the first place, after all.

Packing Machine

Luckily there is a corresponding tool for generation of virtual-machine and container images from a common configuration base, called Packer. Packer is not an alternative to existing configuration management tools but complements them, and can use them to perform the actual configuration of the images, allowing you to reuse your existing setups.

So with all that said, let’s create a new Elastic Bamboo AWS agent image using Packer and some Bamboo agent setup scripts I’ve created previously. In particular, the standard Elastic images provided with Bamboo use an older version of Docker and don’t provide Compose, so let’s fix that up…

As usual, the source for everything here is available on Bitbucket, in this case in this repository.

Creating our AWS image

Packer uses JSON for its configuration format; for now we’ll just create a file called bamboo-image.json, which will contain our initial configuration. I’ll introduce the individual sections of the configuration and then pull it all together further down.

As we’re working with AWS we’ll need to provide an ID and key; obviously we don’t want to put those into our configuration, as we’ll be checking it into Git. To avoid this, Packer allows you to provide additional variables at run-time via the environment, the command line, or a separate file. The version in my repository uses an external file, but for simplicity we’ll assume you’ve put the information into the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. We then pull these into the configuration using a variables block like this:
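For reference, the separate-file approach is just a JSON file containing the same variable names, passed to Packer with the -var-file option. A minimal sketch (the filename aws-credentials.json is my choice here, not anything Packer requires):

```shell
# Write the credentials to a separate file that is kept out of
# version control (remember to add it to .gitignore).
cat > aws-credentials.json <<'EOF'
{
    "aws_access_key": "MY-AWS-ID",
    "aws_secret_key": "MY-AWS-KEY"
}
EOF

# At build time, pass the file to Packer:
#   packer build -var-file=aws-credentials.json bamboo-image.json
```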


"variables": {

    "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",

    "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}"

}

With that in place we can start creating our image. Packer provides a number of image builders for AWS images, including one that can provision into a chroot. However, for simplicity the EBS-based builder is the best choice for our case.

It’s possible to produce a Bamboo AWS image from scratch, but as we already have a working image provided, it’s easier when starting out to base our new version on that. To do this we need to look up the ID of the existing Bamboo AMI: go into your Bamboo server administration page and select Image Configurations. This will list all the pre-supplied images available. We’ll use the Ubuntu one, which has the ID ami-1c247d74; we put this into the builders section, along with some other relevant information:


"builders": [{

    "type": "amazon-ebs",

    "access_key": "{{user `aws_access_key`}}",

    "secret_key": "{{user `aws_secret_key`}}",

    "region": "us-east-1",

    "source_ami": "ami-1c247d74",

    "instance_type": "m1.medium",

    "ssh_username": "ubuntu",

    "ami_name": "bamboo-ami {{timestamp}}"

}]

While this is sufficient, it’s usually a good idea to supply some additional tags, which allow auditing and accounting of your images, so let’s add that too:


"tags": {

    "Name": "Stock Bamboo Ubuntu with updated Docker/Compose",

    "resource_owner": "ssmith",

    "service_tier": "app",

    "environment": "prod"

}

Putting that together, the Packer configuration now looks like this:


{

    "variables": {

        "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",

        "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}"

    },

    "builders": [{

        "type": "amazon-ebs",

        "access_key": "{{user `aws_access_key`}}",

        "secret_key": "{{user `aws_secret_key`}}",

        "region": "us-east-1",

        "source_ami": "ami-1c247d74",

        "instance_type": "m1.medium",

        "ssh_username": "ubuntu",

        "ami_name": "bamboo-ami {{timestamp}}",

        "tags": {

            "Name": "Stock Bamboo Ubuntu with updated Docker/Compose",

            "resource_owner": "ssmith",

            "service_tier": "app",

            "environment": "prod"

        }

    }]

}

This is actually a complete Packer configuration; if you want to test it you can save it to the file bamboo-image.json and run the following commands:


$ export AWS_ACCESS_KEY_ID="MY-AWS-ID"

$ export AWS_SECRET_ACCESS_KEY="MY-AWS-KEY"

$ packer build bamboo-image.json
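If you’re scripting this, it’s worth failing fast when the credentials are missing, rather than letting Packer get partway into the build before erroring. A small guard function (my own convenience wrapper, not part of Packer) might look like:

```shell
#!/bin/bash

# Check that the AWS credential variables are set before invoking Packer.
check_aws_env() {
    if [ -z "$AWS_ACCESS_KEY_ID" ]; then
        echo "Error: AWS_ACCESS_KEY_ID is not set" >&2
        return 1
    fi
    if [ -z "$AWS_SECRET_ACCESS_KEY" ]; then
        echo "Error: AWS_SECRET_ACCESS_KEY is not set" >&2
        return 1
    fi
    echo "ok"
}

# Typical use:
#   check_aws_env && packer build bamboo-image.json
```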

Note: this will use up some runtime on the AWS cloud and may incur some fees, although they should be minimal. At the end of the Packer run you should see a line like:


    amazon-ebs: AMI: ami-d3adb33f

This is the ID of the new image that was created.

Updating the image contents

The benefit of producing our own images comes from being able to provide custom software to our build agents. But the image we created above just copies the existing AMI, so it is functionally equivalent. To modify the image before we save it we need to add a provisioners section to our configuration. Provisioners are how Packer configures the image prior to dumping it to an AMI, and they are where it integrates with existing configuration management tools.

Packer supplies integrations with all the current provisioning tools, such as Chef, Puppet and Ansible (and as it supports plain shell scripts, potentially any other tool can be used with a little work). We’ll be using two of those here: shell and ansible-local.

Getting Ansible onto our image

We want to use Ansible for our provisioning, as I already have a bunch of playbooks and roles available for managing Docker and Bamboo agents. The Packer ansible-local provisioner relies on Ansible being available on the image before it is run; this is a bit different from the usual Ansible approach of connecting to the host via SSH. However, as we also use Ansible for our continuous deployment workflow, we’d like it to be on our images by default anyway.

But this creates a chicken-and-egg situation, as we can’t deploy Ansible with Ansible; we need to get it onto the image first. This is where the all-purpose shell provisioner comes in. shell will run an arbitrary script on the image while it is being built, so we can install Ansible with that. The script looks like this:


#!/bin/bash -eux

sudo apt-add-repository -y ppa:ansible/ansible

sudo apt-get -y update

sudo apt-get -y install ansible

We just save this to a file and then get Packer to use it via the provisioners section:


"provisioners": [

    {

        "type": "shell", 

        "scripts": [

            "scripts/ansible.sh"

        ]

    }

]

Updating Docker with Ansible

With Ansible on our image we can now get down to the real work of updating Docker and adding Compose. Going into the details of Ansible playbooks and roles is beyond the scope of this post, but the full versions of the Ansible files are available in the repository. The main files of interest are bamboo-docker-update.yml (the parent playbook) and roles/docker/tasks/main.yml (the role that configures Docker). These are YAML files which look like this:

Playbook

This merely invokes the appropriate Ansible roles:


- hosts: all

  sudo: true

  vars:

    - docker_users:

        - bamboo

  roles:

    - ubuntu-common

    - docker

Role tasks

The role uses the official Docker Ubuntu repository to upgrade Docker to the latest version, then downloads Compose and places it under /usr/local/bin/:


- name: Add Docker repo key

  apt_key: keyserver=keyserver.ubuntu.com id=36A1D7869245C8950F966E92D8576A8BA88D21E9

- name: Add Docker repo

  apt_repository: repo='deb https://get.docker.com/ubuntu docker main' state=present

- name: Install Docker

  apt: pkg=lxc-docker state=installed

- name: Add users to docker group

  user: name={{item}} groups=docker append=yes

  with_items: docker_users

- name: Install Compose

  get_url: url="https://github.com/docker/compose/releases/download/{{compose_version}}/docker-compose-{{ansible_system}}-{{ansible_machine}}" dest=/usr/local/bin/docker-compose

- name: Set Compose permissions

  file: path=/usr/local/bin/docker-compose mode=a+x
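One thing to note: the get_url task above references a compose_version variable, which needs to be defined somewhere for the role to work. A minimal defaults file, following standard Ansible role layout (the path follows role conventions, and the version number here is just an example; check the repository for the real value), would be:

```yaml
# roles/docker/defaults/main.yml
# Default Compose release to download; override per-environment as needed.
compose_version: 1.3.3
```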

With the Ansible configuration in place we can now add it to our provisioners section:


"provisioners": [

    {

        "type": "shell", 

        "scripts": [

            "scripts/ansible.sh"

        ]

    },

    {

        "type": "ansible-local",

        "playbook_dir": ".",

        "playbook_file": "bamboo-docker-update.yml"

    }

]

Pulling it all together

Putting all this together, the final Packer configuration looks like this:


{

    "variables": {

        "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",

        "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}"

    },

    "builders": [{

        "type": "amazon-ebs",

        "access_key": "{{user `aws_access_key`}}",

        "secret_key": "{{user `aws_secret_key`}}",

        "region": "us-east-1",

        "source_ami": "ami-1c247d74",

        "instance_type": "m1.medium",

        "ssh_username": "ubuntu",

        "ami_name": "bamboo-ami {{timestamp}}",

        "tags": {

            "Name": "Stock Bamboo Ubuntu with updated Docker/Compose",

            "resource_owner": "ssmith",

            "service_tier": "app",

            "environment": "prod"

        }

    }],

    "provisioners": [

        {

            "type": "shell", 

            "scripts": [

                "scripts/ansible.sh"

            ]

        },

        {

            "type": "ansible-local",

            "playbook_dir": ".",

            "playbook_file": "bamboo-docker-update.yml"

        }

    ]

}

This is in the file bamboo-docker-update.json, and can be invoked with:


$ export AWS_ACCESS_KEY_ID="MY-AWS-ID"

$ export AWS_SECRET_ACCESS_KEY="MY-AWS-KEY"

$ packer build bamboo-docker-update.json

Packer will go off for a while creating and modifying our image, then in the final step dump the running VM to an AMI and tell us its ID; the last line of the Packer run looks something like this:


us-east-1: ami-d3adb33f

That (dummy, in this case) ID is what we need for the next step: telling Bamboo about the AMI.
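If you’d rather capture that ID for further automation than copy it by hand, one approach (a sketch, assuming the build output was saved to a log file ending with the region/AMI summary line shown above) is to pull it out with standard text tools:

```shell
# Extract the final AMI ID from a saved Packer build log.
extract_ami_id() {
    grep -oE 'ami-[0-9a-f]+' "$1" | tail -n 1
}

# Example use:
#   packer build bamboo-docker-update.json | tee packer-build.log
#   extract_ami_id packer-build.log
```

Packer also has a -machine-readable flag whose output is easier to parse reliably, if you want something more robust than grepping the human-readable log.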

Adding images to Bamboo

Adding new AMIs to Bamboo is pretty straightforward. Inside your Bamboo administration screen select Image Configuration; at the bottom of the list of existing images is a form where you can enter the new image’s details. Just fill this in, including the AMI ID that Packer gave us:

Adding an AMI to Bamboo

Once this is added it will be available for use by Bamboo plans. However, we’ll probably want to prefer this image over the others, as it has a newer Docker and Compose installed. To do this we need to update its capabilities. Capabilities are tags on Bamboo images that specify what they have available, usually installed software.

To flag our unique AMI, click on Capabilities next to our newly created image; at the bottom is a form to add new ones. For the Docker version we’ll add a new capability called docker.version with a value of 1.7:

Adding a Docker version capability

To flag that we have Compose installed, we’ll add a new executable capability called Docker Compose:

Adding a Compose capability

That’s it; now we can start using our new Elastic Bamboo image by depending on its unique capabilities in our Bamboo jobs…

Using our Elastic Bamboo image

To ensure our jobs run on our new image we just make its capabilities a requirement in our Bamboo build plan. To do this, edit the job configuration and select the Requirements tab. Then add our new capabilities:

Adding a requirement

From now on, whenever this job is run Bamboo will automatically start one of these instances if necessary and run our job on it.

More information

If you want more information on Packer then the best place to start is its home page. For more information on Bamboo and Docker, check out our past posts in our Bamboo and Docker categories.

To keep up to date on what we’re up to follow the Atlassian Dev Twitter feed, or my Twitter: @tarkasteve.