Deploying Spring Boot Microservices to Multiple AWS EC2 Instances
Learn how to create and deploy microservices instances to multiple AWS EC2 instances.
In a previous tutorial, we deployed services in a Docker Swarm using Docker stacks, using Play With Docker to simulate multiple nodes. In this tutorial, we will start multiple AWS EC2 instances and deploy the microservices on them using Docker Swarm.
You may also enjoy:
Running Services Within a Docker Swarm (Part 1)
This tutorial is also explained in a companion YouTube video.
Getting Started
Starting Multiple EC2 Instances Using Docker Swarm
For this, you will need to register with Amazon Web Services and create an AWS account. Registration requires credit card details. AWS offers a free tier for one year, subject to usage limits; if you exceed them, AWS will charge you. In this tutorial, we will start two AWS EC2 instances. Once you are done with this tutorial, remember to stop or terminate them.
Once we have registered with AWS, go to the Services section and select EC2.
The EC2 Dashboard shows that there are currently zero instances running.
From the left-side menu, select Security Groups.
Docker Swarm requires a specific set of ports to be open between the nodes. Create a new security group named Docker with inbound rules for the following ports (plus TCP port 22 so we can SSH in), and keep the default outbound rule that allows all traffic:
TCP port 2377 for cluster management communication
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
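If you prefer the AWS CLI to the console, the same security group can be sketched as below. This is a minimal sketch, not part of the original tutorial: it assumes the aws CLI is installed and configured for your region, and each command is echoed so you can review it first (remove the leading echo to execute).

```shell
# Sketch: create the "Docker" security group and open the Swarm ports
# via the AWS CLI. Commands are echoed for review; remove the leading
# "echo" to actually execute them.
GROUP=Docker

echo aws ec2 create-security-group \
  --group-name "$GROUP" \
  --description "Docker Swarm ports"

# 22/tcp SSH access, 2377/tcp cluster management,
# 7946/tcp+udp node communication, 4789/udp overlay network traffic.
for rule in tcp:22 tcp:2377 tcp:7946 udp:7946 udp:4789; do
  proto=${rule%%:*}
  port=${rule##*:}
  echo aws ec2 authorize-security-group-ingress \
    --group-name "$GROUP" --protocol "$proto" --port "$port" --cidr 0.0.0.0/0
done
```

Opening the ports to 0.0.0.0/0 is acceptable for a short-lived demo like this one; for anything longer-lived, restrict the --cidr to your own address range.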
Next, go again to the EC2 home page and click on Launch Instance.
Select the Amazon Linux 2 AMI (HVM) Machine.
Select the Instance Type as t2.micro, which is the default option. Select Configure Instance Details.
Keep the default Configure Instance Details as provided and select Add Storage.
Keep the default Storage setting and click Add Tags.
In the Tags section add a new tag named ec1 and select Configure Security Group.
In the Configure Security Group section, select the existing security group named Docker that we had created previously.
Finally, launch a new instance. Create a new key pair named ec1 and download the key named ec1.pem.
Repeat the steps above to create a second EC2 instance. This time, add the tag ec2, and when launching the instance, don't create a new key pair; instead, select the existing key pair ec1. We have now launched two EC2 instances.
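The console steps above can also be sketched with the AWS CLI. This is a hypothetical sketch, not from the original tutorial: <ami-id> is a placeholder for the current Amazon Linux 2 AMI in your region, the Name tag values mirror the ec1/ec2 tags used above, and the commands are echoed so you can review them before running.

```shell
# Sketch: launch the two instances from the CLI (echoed for review;
# remove the leading "echo" to execute).
# <ami-id> is a placeholder -- look up the Amazon Linux 2 AMI for your region.
for name in ec1 ec2; do
  echo aws ec2 run-instances \
    --image-id "<ami-id>" \
    --instance-type t2.micro \
    --key-name ec1 \
    --security-groups Docker \
    --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=$name}]"
done
```

Note that --security-groups takes group names, which works in the default VPC; in a non-default VPC you would pass the group's ID via --security-group-ids instead.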
Next, we will connect to them using PuTTY. For this, we first need to convert the ec1.pem key to PuTTY's ec1.ppk format. This is done with PuTTYgen using the following steps:
Open PuTTYgen.
Load the ec1.pem file from where you have stored it. Select Save private key and save the key as ec1.ppk.
Next, we will connect to both EC2 instances using PuTTY.
Open PuTTY. In the AWS console, selecting the EC2 instance and clicking the Connect button shows the details needed to connect to it.
In PuTTY, enter the host name from the Connect dialog, e.g. ec2-user@ec2-18-216-91-80.us-east-2.compute.amazonaws.com, and under SSH -> Auth, select the ec1.ppk key. Click Open.
The EC2 instance is now connected using PuTTY.
Similarly, connect to the second EC2 instance.
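As a side note for Linux and macOS users: PuTTY is not needed there, since the stock ssh client accepts the downloaded .pem key directly. A minimal sketch, using the example hostname from above (substitute your instance's own public DNS name); the command is built into a variable and echoed here so it can be reviewed, and the two commented commands are what you would run on a real machine.

```shell
# Sketch: connect with plain ssh instead of PuTTY.
HOST=ec2-18-216-91-80.us-east-2.compute.amazonaws.com
SSH_CMD="ssh -i ec1.pem ec2-user@$HOST"

# ssh refuses keys that are world-readable, so first run:
#   chmod 400 ec1.pem
# then connect (ec2-user is the default user on Amazon Linux 2) with:
echo "$SSH_CMD"
```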
Starting Services on AWS EC2 Instances Using Docker Swarm
On both EC2 instances, install Docker and then start the Docker service:
sudo yum install docker -y
sudo service docker start
On the EC2 instance that will be the manager (leader) node, initialize Docker Swarm. The output of this command prints the exact join command, token included, for worker nodes.
sudo docker swarm init
On the second EC2 instance, which will be the worker node, run the join command printed by the init step; it includes the token as well as the manager's IP address and port 2377:
sudo docker swarm join --token <Token> <ManagerIP>:2377
On the manager node, we can list the nodes in the Docker Swarm as follows:
sudo docker node ls
Now, as in the previous tutorial, we will create the Docker stack file named docker-compose.yaml:
sudo vi docker-compose.yaml
The content of the file will be as follows:
version: "3"
services:
  consumer:
    image: javainuse/employee-consumer
    networks:
      - consumer-producer
    depends_on:
      - producer
  producer:
    image: javainuse/employee-producer
    ports:
      - "8080:8080"
    networks:
      - consumer-producer
networks:
  consumer-producer:
Next, deploy the Docker stack to the Swarm, which spans both EC2 instances, using the stack file created above:
sudo docker stack deploy -c docker-compose.yaml dockTest
We can list the running services in Docker Swarm as follows:
sudo docker service ls
Also, by listing the running containers on each node, we can see which EC2 instance the employee consumer and employee producer services are running on. Below, the employee consumer is running on the manager node while the employee producer is running on the worker node.
sudo docker container ls
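One caveat: docker container ls only lists the containers on the node where it is run, so checking placement this way means logging in to each instance. From the manager node alone, docker service ps reports the node each service task landed on. A small sketch (the service names assume the stack was deployed as dockTest, as above; the commands are echoed for review, and removing the leading echo runs them on a live Swarm manager):

```shell
# Sketch: on the manager node, show where each service's tasks run.
# docker service ps prints a NODE column with the placement.
for svc in dockTest_consumer dockTest_producer; do
  echo sudo docker service ps "$svc"
done
```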
Also, if we check the employee consumer logs, we can see that the REST service exposed by the employee producer is successfully consumed by the employee consumer. (Here l3 is the leading portion of the consumer container's ID as shown by docker container ls on that node; substitute your own container's ID.)
sudo docker container logs l3