How to Deploy Apps Effortlessly With Packer and Terraform
Learn how to orchestrate a whole deployment with just a couple of config files using Packer and Terraform.
With Packer and Terraform, you can easily create a full DevOps deployment to maintain release cycles and infrastructure updates for your applications on Alibaba Cloud.
Alibaba Cloud has published a very neat white paper about DevOps that is well worth reading. It explains how "DevOps is a model that goes beyond simple implementation of agile principles to manage the infrastructure. John Willis and Damon Edwards defined DevOps using the term CAMS: Culture, Automation, Measurement, and Sharing. DevOps seeks to promote collaboration between the development and operations teams."
Roughly, this means there is a new role, or mindset, in a team that aims to connect software development and infrastructure management. This role requires knowledge of both worlds and takes advantage of the cloud paradigm, which keeps growing in importance. But DevOps practices are not limited to large enterprises. As developers, we can easily incorporate DevOps into our daily tasks. In this tutorial, you will see how easy it is to orchestrate a whole deployment with just a couple of config files. We will run our application on an Alibaba Cloud Elastic Compute Service (ECS) instance.
What Is Packer?
Packer is an open-source DevOps tool made by HashiCorp that creates machine images from a single JSON config file, which makes it easy to track changes to an image over time. It is cross-platform and can create multiple images in parallel.
If you have Homebrew, just type brew install packer to install it.
In essence, Packer creates ready-to-use images containing the operating system plus whatever extra software your applications need, a bit like building your own distribution. Imagine you want Debian, but with a custom PHP application of yours built in by default. With Packer this is very easy to do, and in this how-to we will create exactly that.
What Is Terraform?
When deploying, we have two big tasks to complete. One is to pack the actual application into a suitable environment, creating an image. The other is to create the underlying infrastructure where the application is going to live, that is, the actual server to host it.
For this, Terraform, made by HashiCorp, the same company behind Packer, is a very interesting and powerful tool. Based on the same principles as Packer, Terraform lets you build infrastructure in Alibaba Cloud using a single config file, this time in the TF format, which likewise helps with versioning and gives a clear picture of how all the bits beneath your application fit together.
To install Terraform and the Alibaba Cloud Official provider, please follow the instructions in this other article.
What We Want to Achieve
As mentioned at the top of the article, we are going to create and deploy a simple PHP application in a pure DevOps way, that is, taking care of every part of the deployment, from the software running it to the underlying infrastructure supporting it.
Steps
For the sake of the "Keep It Simple, Stu*" (KISS) principle, we will create a docker-compose-based application that retrieves METAR weather data for airports, using their ICAO airport codes and pulling the data from the US National Weather Service. Then we will create the image with Packer using Ubuntu, and deploy the infrastructure from that image with Terraform.
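To give a flavor of the data involved: a METAR report is a single coded line, and the app's JavaScript decodes fields such as wind out of it. A minimal sketch of that idea in shell (the report string below is a made-up sample):

```shell
# A raw METAR report is one coded line; this one is a made-up sample.
metar="LESO 121530Z 04007KT 9999 FEW020 22/17 Q1018"

# The wind group has the form dddffKT: direction 040 degrees, speed 07 knots.
wind=$(echo "$metar" | grep -oE '[0-9]{5}KT')
echo "$wind"   # 04007KT
```

The real decoding happens client-side in the app's JavaScript; this only illustrates the format.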
PHP Application
Again, for the sake of simplicity, I have created the application ready to use. You can find the source code on GitHub to have a look: it includes an index.php, two JavaScript files to decode the METAR data, and a bit of CSS and a PNG image to make it less boring. It even indicates the direction of the wind! But don't worry, you won't need to clone the repository at this point.
The application is based on docker-compose and is something we will install as a dependency with Packer later.
Building the Image With Packer
Let's get our hands dirty! For this, we need to create a folder somewhere on our computer, let's say ~/metar-app. Then let's cd into it and create a file named metar-build.json with the following contents:
```json
{
  "variables": {
    "access_key": "{{env `ALICLOUD_ACCESS_KEY`}}",
    "region": "{{env `ALICLOUD_REGION`}}",
    "secret_key": "{{env `ALICLOUD_SECRET_KEY`}}"
  },
  "builders": [
    {
      "type": "alicloud-ecs",
      "access_key": "{{user `access_key`}}",
      "secret_key": "{{user `secret_key`}}",
      "region": "{{user `region`}}",
      "image_name": "metar_app",
      "source_image": "ubuntu_16_0402_64_20G_alibase_20180409.vhd",
      "ssh_username": "root",
      "instance_type": "ecs.t5-lc1m1.small",
      "internet_charge_type": "PayByTraffic",
      "io_optimized": "true"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "base-setup"
    }
  ]
}
```
And right next to it, a file named base-setup with the following:
```shell
#!/usr/bin/env bash
# Prerequisites for the Docker apt repository
apt-get update && apt-get install -y apt-transport-https ca-certificates curl git-core software-properties-common
# Docker's official GPG key and repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update && apt-get install -y docker-ce docker-compose
# Replace the packaged docker-compose with a newer release binary
curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-`uname -s`-`uname -m` -o /usr/bin/docker-compose
chmod +x /usr/bin/docker-compose
# Fetch the application code
mkdir /var/docker
git clone https://github.com/roura356a/metar.git /var/docker/metar
```
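Before handing a provisioning script to Packer, a cheap local sanity check is bash -n, which parses a script without executing it. A sketch (a trimmed copy of the script is written to /tmp here only to keep the example self-contained):

```shell
# Parse-check a shell script without running any of its commands.
cat > /tmp/base-setup-check <<'EOF'
#!/usr/bin/env bash
apt-get update && apt-get install -y docker-ce docker-compose
EOF

bash -n /tmp/base-setup-check && echo "syntax OK"   # syntax OK
```

This catches typos like unbalanced quotes before you spend minutes waiting on a remote build.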
Once you have those two files, you can run packer build metar-build.json and wait for it to finish. Please note that for this to work, you need three environment variables set on your machine with the relevant values for you: ALICLOUD_REGION, ALICLOUD_ACCESS_KEY, and ALICLOUD_SECRET_KEY. This step will take a while, as Packer creates an ECS instance, installs all the software on it, stops the instance, creates a snapshot of it, and finally creates the image of the whole system.
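If you have not exported those variables yet, it looks like this (the values below are placeholders; substitute your own credentials from the Alibaba Cloud console, and whichever region you prefer):

```shell
# Placeholder credentials; replace with your own before running packer build.
export ALICLOUD_ACCESS_KEY="your-access-key-id"
export ALICLOUD_SECRET_KEY="your-access-key-secret"
export ALICLOUD_REGION="eu-central-1"

# Packer reads these from the environment, as wired up in the
# "variables" section of metar-build.json.
env | grep '^ALICLOUD_' | sort
```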
After the image is completed, Packer will output ==> Builds finished. Good, we have an image ready to use! We can continue to the next step.
Deploying the Infrastructure With Terraform
It's time to create the ECS instance. For this, in the same folder, we will create a file named main.tf with the following content:
```hcl
provider "alicloud" {}

data "alicloud_images" "search" {
  name_regex = "metar_app"
}

data "alicloud_instance_types" "search" {
  instance_type_family = "ecs.xn4"
  cpu_core_count       = 1
  memory_size          = 1
}

data "alicloud_security_groups" "search" {}

data "alicloud_vswitches" "search" {}

resource "alicloud_instance" "app" {
  instance_name = "metar_app"
  image_id      = "${data.alicloud_images.search.images.0.image_id}"
  instance_type = "${data.alicloud_instance_types.search.instance_types.0.id}"
  vswitch_id    = "${data.alicloud_vswitches.search.vswitches.0.id}"

  security_groups = [
    "${data.alicloud_security_groups.search.groups.0.id}",
  ]

  internet_max_bandwidth_out = 100
  password                   = "Test1234!"
  user_data                  = "${file("user-data")}"
}

output "ip" {
  value = "${alicloud_instance.app.public_ip}"
}
```
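One detail worth knowing about the alicloud_images lookup: if you rebuild the image with Packer later, more than one image may match the regex. Assuming your provider version supports it, the data source's most_recent argument picks the newest match; a sketch:

```hcl
data "alicloud_images" "search" {
  name_regex  = "metar_app"
  most_recent = true # if several builds match, use the newest one
}
```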
And, right next to it, create a file named user-data with the following:
```shell
#!/usr/bin/env bash
cd /var/docker/metar && docker-compose up -d
```
To be clear, at this moment we should have a file structure like this:
```
metar-app/
├── metar-build.json
├── base-setup
├── main.tf
└── user-data
```
Ready to deploy. Run terraform init, then terraform plan to check that everything is fine, and terraform apply to launch the process.
When the infrastructure is built, Terraform will output the IP of the newly created ECS instance, let's say 111.111.111.111.
Testing
If everything went well, you will be able to go to http://111.111.111.111/LESO and see, in this case, the latest weather report from San Sebastián airport, which has one of the most beautiful approaches in the world, located in the north of Spain.
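The URL pattern is simply the instance's public IP followed by an ICAO code as the path. A small sketch (the IP is the placeholder from above; yours will differ):

```shell
ip="111.111.111.111"   # the address Terraform printed; yours will differ
icao="LESO"            # San Sebastián; any ICAO airport code works
url="http://$ip/$icao"
echo "$url"   # http://111.111.111.111/LESO
# A non-interactive check from a terminal: curl -s "$url"
```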
Wrapping Up
See, with almost no effort you have just created a full DevOps deployment for your application. This makes it much easier for you and your team to maintain its release cycles and infrastructure updates, and it will improve uptime. No more fiddling directly with hosts and Linux commands in the normal day-to-day.
Published at DZone with permission of Alberto Roura, DZone MVB.