Scaling a Simple PHP App With Docker Swarm
Docker Swarm makes it relatively easy to scale apps. With the help of Terraform and Packer, you can set up scaling for an app using cloud-native infrastructure.
Note: This is based on Docker 1.12 as of the time of writing. Whilst Docker 1.13 is now released, it is not yet in the CoreOS builds. As soon as 1.13 is available, I will append a footnote to this article and edit this note!
As more and more people jump on the Docker bandwagon, more and more people are wondering just exactly how we scale this thing. Some will have heard of Docker-Compose, some will have heard of Docker Swarm, and then there are some folks out there with their Kubernetes and Mesos clusters.
Docker Swarm became native to Docker in v1.12 and makes container orchestration super simple. Not only that, but each service is reachable by name thanks to the built-in DNS and service discovery. With its overlay network and inbuilt routing mesh, all the nodes can accept connections on the published ports for any of the services running in the Swarm. This essentially lets you treat multiple nodes as one.
Just to top it off, Docker Swarm has built-in load balancing. Send a request to any of the nodes and it will be routed, round-robin, to one of the containers running the requested service. Simply amazing, and I'm going to show you how you can get started with this great technology.
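To see the routing mesh in action once you have a swarm running, here's a quick sketch; the service below is a throwaway example for illustration, not part of this tutorial's app:
docker service create --name demo --replicas 3 -p 8080:80 nginx:alpine
# Any node in the swarm now answers on port 8080, and requests are
# balanced round-robin across the three replicas:
curl http://<any-node-ip>:8080/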
For my example, I've chosen a PHP application (cue the flames); it's a great way to show how a real-world app may be scaled.
There are a few parts that I will be covering:
- Creating base images
- Using Docker-Compose in Development
- Creating the infrastructure (Terraform)
- Creating a base image (Packer)
- Deploying
- Scaling
1. Creating Base Images
You may already be familiar with keeping provisioned AMIs/images up in the cloud that contain most of the services you need. That's essentially all a base/foundation image is. The reality is that every time you push your code, you don't want to have to wait for the stock CentOS/Ubuntu image to be re-provisioned. Base images allow you to create a basic setup that you can use not just on one project, but on multiple projects.
What I've done is create a repository called Docker Images, which currently has just two services: Nginx and PHP-FPM. Inside it is a little build script that iterates over each container, builds it, and pushes it to Docker Hub.
Your foundation images can contain whatever you want. Mine hold some simple Nginx/PHP-FPM configuration, and I have configured Supervisord to ensure that php-fpm is always running. Additionally, as I place both dev and prod versions of php.ini on the image, Supervisord accepts an environment parameter so the container can be fired up either in dev mode or production ready.
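To make that concrete, here's a rough sketch of what such a base php-fpm Dockerfile could look like; the package names and paths are illustrative assumptions, not the exact contents of my repo:
# Base php-fpm image sketch (packages/paths are illustrative assumptions)
FROM ubuntu:16.04

# Install php-fpm and supervisord
RUN apt-get update && \
    apt-get install -y php7.0-fpm supervisor && \
    rm -rf /var/lib/apt/lists/*

# Ship both dev and prod php.ini variants; supervisord picks one
# based on the APPLICATION_ENV environment variable at start-up
COPY config/php.ini.dev /etc/php/7.0/fpm/php.ini.dev
COPY config/php.ini.prod /etc/php/7.0/fpm/php.ini.prod
COPY config/supervisord.conf /etc/supervisor/conf.d/supervisord.conf

EXPOSE 9000

# Run supervisord in the foreground so it keeps php-fpm alive
CMD ["/usr/bin/supervisord", "-n"]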
This is the build.sh script within the Docker Images repo:
Build.sh:
#!/bin/bash
VERSION=1
CONTAINER=$1
BUILD_NUMBER=$2
docker build ./$CONTAINER -t bobbydvo/ukc_$CONTAINER:latest
docker tag bobbydvo/ukc_$CONTAINER:latest bobbydvo/ukc_$CONTAINER:$VERSION.$BUILD_NUMBER
docker push bobbydvo/ukc_$CONTAINER:latest
docker push bobbydvo/ukc_$CONTAINER:$VERSION.$BUILD_NUMBER
A simple Jenkins job with parameterised builds has been configured to pass the correct arguments to the script:
echo $BUILD_NUMBER
docker -v
whoami
sudo docker login -u bobbydvo -p Lr6n9hrGBLNxBm
sudo ./build.sh $CONTAINER $BUILD_NUMBER
Note: You will have to ensure that the Jenkins user is allowed sudo access.
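One way to grant that, assuming a standard sudoers setup (the paths here are placeholders; tighten them to the exact commands your job runs), is a drop-in file:
# /etc/sudoers.d/jenkins  (example only)
jenkins ALL=(ALL) NOPASSWD: /usr/bin/docker, /var/lib/jenkins/workspace/docker-images/build.sh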
You can find the repository here.
Each time the job is run, it will place new versions of each container here.
Some may argue that, due to the cache built up in layers within Docker, you can skip the base image step. However, I find it a great way to keep jobs isolated, with the added benefit of being able to re-use the containers for other projects. It also gives great visibility when a container build fails simply because an external package has been updated; in that case your 'latest' tag won't be updated and your deployments won't be halted!
Google has a great guide on building foundation images.
We now need to test our two images/containers with our PHP app.
2. Using Docker-Compose in Dev
This is my repository with a dummy PHP app.
If you're familiar with PHP, you will notice that this is a Slim 3 application using Composer for dependency management. You'll also find a file, 'docker-compose.yml', which coordinates Docker to run both of our containers:
Docker-compose.yml:
version: "2"
services:
php-fpm:
tty: true
build: ./
image: bobbydvo/dummyapp_php-fpm:latest
ports:
- "9000:9000"
environment:
- APPLICATION_ENV=dev
web:
tty: true
image: bobbydvo/ukc_nginx:latest
ports:
- "80:80"
environment:
- NGINX_HOST=localhost
- NGINX_PORT=80
The php-fpm service uses the Dockerfile in the root of the application to build a new image: it copies the application files onto the base image and saves the result locally as a new image, rather than using the base image directly. As it happens, the Nginx container doesn't need any modification, as it's only the PHP app that will change when we add code. Of course, you can change this to suit your needs if necessary.
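For reference, the Dockerfile at the application root is essentially a thin layer on top of the base image. A minimal sketch, assuming /srv as the web root (the path the Jenkins job later expects the code to live under), would be:
# Dockerfile sketch for the dummy app (paths are assumptions)
FROM bobbydvo/ukc_php-fpm:latest

# Copy the application (including Composer's vendor/ directory) onto the image
COPY . /srv
WORKDIR /srv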
Running the application is as simple as typing:
docker-compose up
You can now head over to http://localhost and test the application. It will be lightning fast. However, this means that the code on the container is what was copied over when 'docker-compose up' was executed. Any changes to local code will not be reflected. There is a solution to this, and it's in the form of 'dev.yml'. This extends the docker-compose.yml file to mount the local volume onto the web root.
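A dev.yml along these lines does the trick; the container-side path is an assumption and should match the web root your Nginx/php-fpm configuration points at:
Dev.yml:
version: "2"
services:
  php-fpm:
    volumes:
      - ./:/srv   # mount local code over the image's copy
  web:
    volumes:
      - ./:/srv   # only needed if Nginx serves static files from the same root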
docker-compose -f docker-compose.yml -f dev.yml up
Now you can head to http://localhost, make some changes, and refresh, and you will see that it's just as though you're coding locally. Hurrah!
Note: There is a known bug with Docker for Mac, which means that the mounted volume has a bit of latency which can affect load times unless you make use of OPCache in dev mode. However, this is being worked on.
So now what? We have some shiny Docker containers that are working brilliantly together for our PHP app. Great for development, but what about the real world?
Our next topic will cover how to use Terraform to create the servers: 3 Docker Managers as well as a number of Docker Slave nodes.
Unfortunately, the provided CoreOS image (great for Docker) doesn't include Docker Swarm, as that is still in the Beta channel, so we will first have to create a new Swarm-enabled image using Packer. Let's go ahead and do that!
3. Using Packer to Create a New Image in Cloud-Native Infrastructure
Packer is another tool from HashiCorp, composed of a set of builders and provisioners. It supports many builders, such as AWS (AMI), Azure, DigitalOcean, Docker, Google, VirtualBox, VMware, and, of course, the one we need: OpenStack. It supports several others too, which is great if you need them!
In terms of provisioning, you can use most of the popular tools such as Ansible, Puppet or Chef, as well as PowerShell and standard shell scripts.
For us, all we need to do is take the stock CoreOS image and tell it to use the Beta channel, which includes Docker Swarm. This can be done by modifying this file:
/etc/coreos/update.conf
...with this data:
GROUP=beta
Currently, Docker Swarm doesn't work with docker-compose.yml files. There is an experimental feature with Stacks and Bundles but, at the time of writing, it hasn't made its way into any of the CoreOS builds and hasn't been fully mapped out in the Docker pipeline yet. However, we'll also install Docker Compose onto CoreOS whilst we're provisioning, as it's a great tool for testing.
As mentioned, we are going to use the OpenStack builder, so here is our 'builder' entry:
"builders": [
{
"type": "openstack",
"image_name": "CoreOS-Docker-Beta-1-12",
"source_image": "8e892f81-2197-464a-9b6b-1a5045735f5d",
"flavor": "c46be6d1-979d-4489-8ffe-e421a3c83fdd",
"ssh_keypair_name": "ukcloudos",
"ssh_private_key_file": "/Users/bobby/.ssh/ukcloudos",
"use_floating_ip": true,
"floating_ip_pool": "internet",
"ssh_username": "core",
"ssh_pty" : true
}
],
The type is required and must state the builder type you're using, whereas image_name should be set to whatever you want your new image to be called. source_image is the original image that is already in Glance. The builder also wants to know a flavor; I'm choosing a small instance, as it's only used for provisioning.
Note: Ensure that you are using an existing keypair name that is in your OpenStack project.
So, now that we have a builder, along with connectivity, let's provision it:
"provisioners": [
{
"type": "shell",
"inline": [
"sudo sh -c 'echo GROUP=beta > /etc/coreos/update.conf'",
"sudo systemctl restart update-engine",
"sudo update_engine_client -update",
"sudo sh -c 'mkdir /opt/'",
"sudo sh -c 'mkdir /opt/bin'",
"sudo sh -c 'curl -L https://github.com/docker/compose/releases/download/1.9.0/docker-compose-`uname -s`-`uname -m` > /opt/bin/docker-compose'",
"sudo sh -c 'chmod +x /opt/bin/docker-compose'"
]
},{
"type": "file",
"source": "/Users/bobby/.ssh/ukcloudos",
"destination": "/home/core/.ssh/key.pem"
}
]
Given the simplicity of what we're doing, I'm just using shell commands, which update CoreOS to use the Beta channel (and in turn install the latest Beta build of Docker) and install Docker Compose.
You'll also notice that we're copying over an ssh key. This is an important piece of the puzzle later on when we need multiple servers to be able to communicate with each other.
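Before kicking off the build, it's worth checking that the template parses cleanly using Packer's validate command:
$ packer validate ./packer/template.json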
All you need to do to kick off this build is:
$ packer build ./packer/template.json
If you now view your images, either using the command line or the control panel, you will see your new image is ready to be consumed. Feel free to create a test instance using this image and type the following command:
docker version
You will see you are on at least 1.12.1, which includes Swarm. If you'd like to verify Docker Swarm is working, you can type the following command:
docker swarm init
Hopefully, everything worked perfectly for you. If not, feel free to view the full source code of this example here: https://github.com/UKCloud/openstack-packer/tree/docker-beta
4. Using Terraform to Create Your Infrastructure
Yet another tool from HashiCorp, and an amazing one: Terraform allows infrastructure to be written as code (IaC) and, not only that, it's declarative and idempotent. No matter how many times you execute it, you'll get the same end result. Older tools tended to be more procedural; take a shell script, for example: if you ask the shell script to create 5 servers and run it 5 times, you'll end up with 25 servers. Terraform is clever, as it maintains state. If you ask it to create 5 servers, it will create 5. Run it again, and it will know you already have 5. Ask it for 8, and it will calculate that you already have 5 and simply add an extra 3. This flexibility is amazing and can be used for magnificent things.
All that being said, this is not a Terraform tutorial. This is a tutorial on how to make use of Terraform to spin up some Docker Managers and some Docker Slaves so that we can deploy our dummy PHP app. It's probably best to first take a look at the full main.tf file:
provider "openstack" {
}
resource "openstack_compute_keypair_v2" "test-keypair" {
name = "ukcloudos"
public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDggzO/9DNQzp8aPdvx0W+IqlbmbhpIgv1r2my1xOsVthFgx4HLiTB/2XEuEqVpwh5F+20fDn5Juox9jZAz+z3i5EI63ojpIMCKFDqDfFlIl54QPZVJUJVyQOe7Jzl/pmDJRU7vxTbdtZNYWSwjMjfZmQjGQhDd5mM9spQf3me5HsYY9Tko1vxGXcPE1WUyV60DrqSSBkrkSyf+mILXq43K1GszVj3JuYHCY/BBrupkhA126p6EoPtNKld4EyEJzDDNvK97+oyC38XKEg6lBgAngj4FnmG8cjLRXvbPU4gQNCqmrVUMljr3gYga+ZiPoj81NOuzauYNcbt6j+R1/B9qlze7VgNPYVv3ERzkboBdIx0WxwyTXg+3BHhY+E7zY1jLnO5Bdb40wDwl7AlUsOOriHL6fSBYuz2hRIdp0+upG6CNQnvg8pXNaNXNVPcNFPGLD1PuCJiG6x84+tLC2uAb0GWxAEVtWEMD1sBCp066dHwsivmQrYRxsYRHnlorlvdMSiJxpRo/peyiqEJ9Sa6OPl2A5JeokP1GxXJ6hyOoBn4h5WSuUVL6bS4J2ta7nA0fK6L6YreHV+dMdPZCZzSG0nV5qvSaAkdL7KuM4eeOvwcXAYMwZJPj+dCnGzwdhUIp/FtRy62mSHv5/kr+lVznWv2b2yl8L95SKAdfeOiFiQ== opensource@ukcloud.com"
}
resource "openstack_networking_network_v2" "example_network1" {
name = "example_network_1"
admin_state_up = "true"
}
resource "openstack_networking_subnet_v2" "example_subnet1" {
name = "example_subnet_1"
network_id = "${openstack_networking_network_v2.example_network1.id}"
cidr = "10.10.0.0/24"
ip_version = 4
dns_nameservers = ["8.8.8.8", "8.8.4.4"]
}
resource "openstack_compute_secgroup_v2" "example_secgroup_1" {
name = "example_secgroup_1"
description = "an example security group"
rule {
ip_protocol = "tcp"
from_port = 22
to_port = 22
cidr = "0.0.0.0/0"
}
rule {
ip_protocol = "tcp"
from_port = 80
to_port = 80
cidr = "0.0.0.0/0"
}
rule {
ip_protocol = "icmp"
from_port = "-1"
to_port = "-1"
self = true
}
rule {
ip_protocol = "tcp"
from_port = "1"
to_port = "65535"
self = true
}
rule {
ip_protocol = "udp"
from_port = "1"
to_port = "65535"
self = true
}
}
resource "openstack_networking_router_v2" "example_router_1" {
name = "example_router1"
external_gateway = "893a5b59-081a-4e3a-ac50-1e54e262c3fa"
}
resource "openstack_networking_router_interface_v2" "example_router_interface_1" {
router_id = "${openstack_networking_router_v2.example_router_1.id}"
subnet_id = "${openstack_networking_subnet_v2.example_subnet1.id}"
}
resource "openstack_networking_floatingip_v2" "example_floatip_manager" {
pool = "internet"
}
resource "openstack_networking_floatingip_v2" "example_floatip_slaves" {
pool = "internet"
}
data "template_file" "cloudinit" {
template = "${file("cloudinit.sh")}"
vars {
application_env = "dev"
git_repo = "${var.git_repo}"
clone_location = "${var.clone_location}"
}
}
data "template_file" "managerinit" {
template = "${file("managerinit.sh")}"
vars {
swarm_manager = "${openstack_compute_instance_v2.swarm_manager.access_ip_v4}"
}
}
data "template_file" "slaveinit" {
template = "${file("slaveinit.sh")}"
vars {
swarm_manager = "${openstack_compute_instance_v2.swarm_manager.access_ip_v4}"
node_count = "${var.swarm_node_count + 3}"
}
}
resource "openstack_compute_instance_v2" "swarm_manager" {
name = "swarm_manager_0"
count = 1
#coreos-docker-beta
image_id = "589c614e-32e5-49ad-aeea-69ebce553d8b"
flavor_id = "7d73f524-f9a1-4e80-bedf-57216aae8038"
key_pair = "${openstack_compute_keypair_v2.test-keypair.name}"
security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]
user_data = "${data.template_file.cloudinit.rendered}"
network {
name = "${openstack_networking_network_v2.example_network1.name}"
floating_ip = "${openstack_networking_floatingip_v2.example_floatip_manager.address}"
}
provisioner "remote-exec" {
inline = [
# Create TLS certs
"echo 'IP.1 = ${self.network.0.fixed_ip_v4}' > internalip",
"docker swarm init --advertise-addr ${self.network.0.fixed_ip_v4}",
"sudo docker swarm join-token --quiet worker > /home/core/worker-token",
"sudo docker swarm join-token --quiet manager > /home/core/manager-token"
]
connection {
user = "core"
host = "${openstack_networking_floatingip_v2.example_floatip_manager.address}"
}
}
}
resource "openstack_compute_instance_v2" "swarm_managerx" {
name = "swarm_manager_${count.index+1}"
count = 2
#coreos-docker-beta
image_id = "589c614e-32e5-49ad-aeea-69ebce553d8b"
flavor_id = "7d73f524-f9a1-4e80-bedf-57216aae8038"
key_pair = "${openstack_compute_keypair_v2.test-keypair.name}"
security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]
user_data = "${data.template_file.managerinit.rendered}"
network {
name = "${openstack_networking_network_v2.example_network1.name}"
}
}
resource "openstack_compute_instance_v2" "swarm_slave" {
name = "swarm_slave_${count.index}"
count = "${var.swarm_node_count}"
#coreos-docker-beta
image_id = "589c614e-32e5-49ad-aeea-69ebce553d8b"
flavor_id = "c46be6d1-979d-4489-8ffe-e421a3c83fdd"
key_pair = "${openstack_compute_keypair_v2.test-keypair.name}"
security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]
user_data = "${data.template_file.slaveinit.rendered}"
network {
name = "${openstack_networking_network_v2.example_network1.name}"
}
}
Alternatively, you can view the full example on GitHub: https://github.com/UKCloud/openstack-terraform/tree/docker-swarm
Creating the First Docker Manager Node
Assuming you're all good with the basic setup of a network, security groups, floating IP addresses, and routing, we'll head straight to the creation of our Docker Swarm.
To do this, we're going to create one Docker Manager, which will run the 'docker swarm init' command.
Main.tf
...
data "template_file" "cloudinit" {
template = "${file("cloudinit.sh")}"
vars {
application_env = "dev"
git_repo = "${var.git_repo}"
clone_location = "${var.clone_location}"
}
}
resource "openstack_compute_instance_v2" "swarm_manager" {
name = "swarm_manager_0"
count = 1
#coreos-docker-beta
image_id = "589c614e-32e5-49ad-aeea-69ebce553d8b"
flavor_id = "7d73f524-f9a1-4e80-bedf-57216aae8038"
key_pair = "${openstack_compute_keypair_v2.test-keypair.name}"
security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]
user_data = "${data.template_file.cloudinit.rendered}"
network {
name = "${openstack_networking_network_v2.example_network1.name}"
floating_ip = "${openstack_networking_floatingip_v2.example_floatip_manager.address}"
}
provisioner "remote-exec" {
inline = [
# Bring up the Swarm!
"echo 'IP.1 = ${self.network.0.fixed_ip_v4}' > internalip",
"docker swarm init --advertise-addr ${self.network.0.fixed_ip_v4}",
"sudo docker swarm join-token --quiet worker > /home/core/worker-token",
"sudo docker swarm join-token --quiet manager > /home/core/manager-token"
]
connection {
user = "core"
host = "${openstack_networking_floatingip_v2.example_floatip_manager.address}"
}
}
}
...
So, what does this do? It's mostly self-explanatory. We're bringing up an instance using the new CoreOS image and running a few shell commands. Amongst them is the 'swarm init' command, which advertises on the IP address allocated to the machine.
The next two commands are the really important ones, though; these are the commands that grab the 'join tokens' that all the other nodes will need in order to join the swarm. For now, we're saving the tokens to the home directory so that later nodes can SSH to this server and grab them (told you there was a reason we needed to add that SSH key to our CoreOS image!).
With just this one instance, we have an active swarm, but one that doesn't do a great deal. The next thing we need to do is create the services, and for that we're using a template file to make use of the cloud-init functionality within OpenStack. The cloud-init file looks like this:
cloudinit.sh
#!/bin/bash
# Script that will run at first boot via Openstack
# using user_data via cloud-init.
docker pull bobbydvo/ukc_nginx:latest
docker pull bobbydvo/ukc_php-fpm:latest
docker network create --driver overlay mynet
docker service create --update-delay 10s --replicas 1 -p 80:80 --network mynet --name web bobbydvo/ukc_nginx:latest
docker service create --update-delay 10s --replicas 1 -p 9000:9000 --network mynet --name php-fpm bobbydvo/ukc_php-fpm:latest
#The above services should be created by the DAB bundle..
#..but Docker 1.13 is changing the way bundles & stacks work, so parking for now.
This tells the Docker Manager to fire off these commands when it first boots up.
If you visit the external IP address at this point, you should see some text like this: "Welcome to your php-fpm Docker container." This is because our application has not yet been deployed; we'll get to that in a bit.
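If you SSH to the manager's floating IP at this point, you can also sanity-check that both services were created before adding more nodes:
ssh core@<floating-ip>
docker service ls        # should list 'web' and 'php-fpm' with 1/1 replicas
docker service ps web    # shows which node each task landed on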
First, we need to create some more Docker Managers, some Docker Slaves, and get them all to join the Swarm!
Note: We're initially deploying the base images, as we've not yet configured our Jenkins job to deploy the application. When we get that far, you may want to retrospectively update this cloudinit file with the image names of the built application, but it's not essential. Don't worry about it!
Adding More Nodes to the Swarm
Adding more Docker Managers is now fairly simple, but we can't just increase the count of the first Docker Manager, as that one has special commands to initiate the Swarm. The second resource below will allow us to configure as many additional managers as we desire. Once up and running, these 'secondary masters' will be no less important than the first Manager, and we will have 3 identical instances with automatic failover.
Docker Swarm managers maintain their shared state using the Raft consensus algorithm, so having at least three is important, and five is strongly recommended in production; a cluster of N managers can tolerate the loss of (N-1)/2 of them. This gives Docker Swarm the ability to keep functioning whilst some manager nodes are out of service for whatever reason.
resource "openstack_compute_instance_v2" "swarm_managerx" {
name = "swarm_manager_${count.index+1}"
count = 2
#coreos-docker-beta
image_id = "589c614e-32e5-49ad-aeea-69ebce553d8b"
flavor_id = "7d73f524-f9a1-4e80-bedf-57216aae8038"
key_pair = "${openstack_compute_keypair_v2.test-keypair.name}"
security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]
user_data = "${data.template_file.managerinit.rendered}"
network {
name = "${openstack_networking_network_v2.example_network1.name}"
}
}
The important part now is to instruct each 'secondary master' to join the swarm as soon as it has booted up. We can do this with another cloud-init script, which I have called 'managerinit.sh':
Managerinit.sh
#!/bin/bash
# Script that will run at first boot via Openstack
# using user_data via cloud-init.
sudo scp -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null -i /home/core/.ssh/key.pem core@${swarm_manager}:/home/core/manager-token /home/core/manager-token
sudo docker swarm join --token $(cat /home/core/manager-token) ${swarm_manager}
Due to this being the first time the server has connected, we're passing a few options to prevent the scp command from prompting for any input. Ultimately, though, we're connecting to the 'primary master' to grab the join tokens that we mentioned earlier in the article. The join tokens are the only way we can ensure we join the correct swarm. The only parameter we pass in is the IP address of the first Swarm Manager.
If you were to execute Terraform as-is, without any slaves, and then SSH to the floating IP, you could run the following command:
docker node ls
And you will see a list of the masters, one of which will show it's the Leader, whereas the others will show they're Reachable.
Right now, masters will be able to serve your services in just the same way that slaves will be able to in future. In fact, you could just create a Swarm full of Masters if you like!
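Conversely, if you'd prefer your managers not to run application workloads at all, you can drain them from any manager node (optional, and not what this tutorial does):
docker node update --availability drain <manager-node-name>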
Adding Slaves to the Swarm
The code to add more slaves is similar to the masters, only this time the count is coming as an input from the variables.tf file. This is so that we can have as many nodes as we require.
resource "openstack_compute_instance_v2" "swarm_slave" {
name = "swarm_slave_${count.index}"
count = "${var.swarm_node_count}"
#coreos-docker-beta
image_id = "589c614e-32e5-49ad-aeea-69ebce553d8b"
flavor_id = "c46be6d1-979d-4489-8ffe-e421a3c83fdd"
key_pair = "${openstack_compute_keypair_v2.test-keypair.name}"
security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]
user_data = "${data.template_file.slaveinit.rendered}"
network {
name = "${openstack_networking_network_v2.example_network1.name}"
}
}
The main difference between the slaves and masters is the cloud init file. In the file below, we're doing a number of things:
- Copying the worker 'join token' from the master.
- Joining the node into the Docker Swarm.
- Scaling the active services down to a minimum of 3.
- Scaling the active services back up to the number of nodes we require.
Slaveinit.sh
#!/bin/bash
# Script that will run at first boot via Openstack
# using user_data via cloud-init.
sudo scp -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null -i /home/core/.ssh/key.pem core@${swarm_manager}:/home/core/worker-token /home/core/worker-token
sudo docker swarm join --token $(cat /home/core/worker-token) ${swarm_manager}
# Horrible hack, as Swarm doesn't evenly distribute to new nodes
# https://github.com/docker/docker/issues/24103
ssh -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null -i /home/core/.ssh/key.pem core@${swarm_manager} "docker service scale php-fpm=3"
ssh -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null -i /home/core/.ssh/key.pem core@${swarm_manager} "docker service scale web=3"
# Scale to the number of instances we should have once the script has finished.
# This means it may scale to 50 even though we only have 10, with 40 still processing.
# Hence the issue above.
ssh -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null -i /home/core/.ssh/key.pem core@${swarm_manager} "docker service scale php-fpm=${node_count}"
ssh -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null -i /home/core/.ssh/key.pem core@${swarm_manager} "docker service scale web=${node_count}"
Copying the token and joining the swarm is fairly trivial, and much like what happens on the master nodes. What we're also doing, though, is issuing a command to the Docker Manager instructing it to scale each service across x nodes, i.e. the number of nodes we are scaling to. Without this code, one would have to scale the infrastructure and then scale the Docker services manually. By including the command in the infrastructure-as-code file, we can scale the whole Docker Swarm from a single 'terraform apply'.
Note: As the annotations suggest, the scaling solution here is not so elegant. I will explain more:
Suppose we have three Docker Managers and we add three Docker Slaves... as the first Docker Slave is created, it will scale the swarm using the 'docker service scale web=6' command, as can be seen in the code above. However, the moment the first Docker Slave issues that command, we only have four nodes. So we have six containers running on four nodes. Not a big problem, as we're about to add another two Docker Slave nodes. However, when the second and third slave nodes join the swarm, Docker doesn't allocate any services to those nodes. The only way to allocate services to said nodes, is to scale down, and back up again, which is precisely what the code is doing above. Docker is aware of this 'feature' and they are looking at creating a flag to pass onto the Docker Swarm join command to redistribute the services.
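If you ever need to trigger that redistribution by hand (for example, after adding nodes outside of Terraform), the same scale-down/scale-up trick works from any manager; the numbers here are just an example:
docker service scale web=3
docker service scale php-fpm=3
docker service scale web=6
docker service scale php-fpm=6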
5. Deploying the Application
We now have three Docker Managers and three Docker Slaves all running in an active Docker Swarm. We can scale up, and we can scale down. This is simply awesome, but not so fun if we don't have our app deployed to test this functionality.
To deploy the app we're going to set up a Jenkins job which will be fired either manually or when a commit has been made.
The Jenkins job should be configured with the commands below; however, if you don't want to create a Jenkins job, you can always throw them into a shell script and modify the variables.
set -e
DUMMY_VERSION=$BUILD_VERSION
NGINX_VERSION='latest'
sudo docker-compose build
sudo docker run -i bobbydvo/dummyapp_php-fpm /srv/vendor/bin/phpunit -c /srv/app/phpunit.xml
# tag & push only if all the above succeeded (set -e)
sudo docker tag bobbydvo/dummyapp_php-fpm:latest bobbydvo/dummyapp_php-fpm:$DUMMY_VERSION
sudo docker push bobbydvo/dummyapp_php-fpm:$DUMMY_VERSION
sudo docker push bobbydvo/dummyapp_php-fpm:latest
ssh core@51.179.219.14 "docker service update --image bobbydvo/dummyapp_php-fpm:$DUMMY_VERSION php-fpm"
ssh core@51.179.219.14 "docker service update --image bobbydvo/ukc_nginx:$NGINX_VERSION web"
Note: You will have to ensure that the Jenkins user is allowed sudo access.
What does this job do, then? We're telling docker-compose to build against the docker-compose.yml file that we included in step 2 for our dev environment. This instructs Docker to build a new image with the latest code, and we then run our unit tests in a container started from that image. As we're using the 'set -e' instruction, we only continue to the next step if the previous step was successful. With that in mind, if our unit tests pass, we tag the latest image and push it to Docker Hub.
The final step is to connect to the Docker Manager and update the service with the latest image. When creating the services, we specified an update delay of 10s, so as soon as this command is issued, it will take approximately a minute for all the replicas to be updated.
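You can watch the rolling update progress from any manager; each replica should be replaced roughly 10 seconds apart, per the --update-delay set when the services were created:
docker service ps php-fpm                  # shows old tasks shutting down and new ones starting
docker service inspect --pretty php-fpm    # confirms the image tag now in use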
You can now visit the floating IP that you've allocated to the Docker Manager and you will see that Docker automatically load balances the traffic amongst all the nodes. Simply amazing!
6. Scaling the Application
The final step, assuming your application is struggling to cope with the load, is to add more nodes. You can modify the value in variables.tf:
Variables.tf
variable "swarm_node_count" {
default = 10
}
And apply!
terraform apply
Literally that simple. You can scale to 10 nodes, 50 nodes, 1000 nodes, and your application will be automatically load balanced via Docker Swarm. What's better, you know that each and every node running is an exact replica, provisioned in exactly the same way, running the exact same code.
I hope you've been able to follow this tutorial, along with understanding all the code examples. However, if you have any comments or questions, please leave them below or tweet me: @bobbyjason.
Many thanks!