How To Run a Docker Container on the Cloud: Top 5 CaaS Solutions
In this article, readers will learn the advantages and disadvantages of the top 5 CaaS solutions that help engineers run Docker containers on the cloud.
In the past few years, a growing number of organizations and developers have joined the Docker journey. Containerization simplifies the software development process because it eliminates dealing with dependencies and working with specific hardware. Still, the biggest advantage of using containers comes down to the portability they offer. But it can be quite confusing to figure out how to run a container on the cloud. You could certainly deploy these containers to servers at your cloud provider using Infrastructure as a Service (IaaS). However, that approach only takes you back to the issue mentioned above: you would have to maintain those servers yourself, when there is a better way.
Table of Contents
- How To Run a Docker Container on the Cloud
- Using a Container Registry
- Using Container-as-a-Service
- Why Should I Use CaaS?
- What Are the Best CaaS Solutions?
- AWS ECS
- AWS Lambda
- AWS App Runner
- Azure Container Instances
- Google Cloud Run
- Conclusion
How To Run a Docker Container on the Cloud
Using a Container Registry
You are probably reading this because your container runs locally and you are wondering how to run it on the cloud. In that case, the next step is to select a container registry that will act as a centralized location for storing your containers. You push your container image to this registry, whether public or private, so it can be distributed from there.
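As a minimal sketch of that workflow (assuming the Docker CLI is installed; `registry.example.com` and the image names are placeholders for your own registry and repository):

```shell
# Build the image locally from the Dockerfile in the current directory.
docker build -t myapp:1.0 .

# Tag the local image with the registry's fully qualified name.
docker tag myapp:1.0 registry.example.com/myteam/myapp:1.0

# Authenticate against the registry, then push the image so it can
# be pulled from anywhere (including your CaaS provider).
docker login registry.example.com
docker push registry.example.com/myteam/myapp:1.0
```

The same sequence applies to Docker Hub, AWS ECR, or Google's registry; only the registry hostname and the login step change.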
Using Container-as-a-Service
Containers-as-a-Service (CaaS) is a model that lets companies run their containers directly on the cloud using a provider of their choice. With CaaS, the infrastructure required to run containers, such as orchestration tools (e.g., Docker Swarm, Kubernetes, OpenStack) and cluster management software, is handled by the provider and invisible to the user. As a side note, CaaS joins the established cloud service models: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS).
Why Should I Use CaaS?
Some of the advantages of using Container-as-a-Service are:
- Cost reduction: it eliminates the time, effort, and money spent on maintaining secure infrastructure to run your container.
- Flexibility: you can easily move from cloud to cloud or even back to your on-prem infrastructure, freeing you from vendor lock-in.
- Speed: since the underlying infrastructure is abstracted away, you can deploy your container more quickly.
Overall, CaaS will not only simplify the running process of a software application but also improve overall security around it as most CaaS solutions offer vulnerability scans. Furthermore, you don’t have to worry about managing the hardware that will run your container.
What Are the Best CaaS Solutions?
When choosing a CaaS solution, some of the key considerations include:
- Can it operate multi-container applications?
- What networks and storage functions are available?
- Which file format does it support?
- How is storage achieved?
- Which billing model does it use?
Amazon Elastic Container Service (Amazon ECS)
Amazon ECS is a scalable container orchestration platform by AWS designed to run, stop, and manage containers in a cluster environment using task definitions. Essentially, a task definition is where you define:
- The container to use.
- How many containers to run.
- How your containers are linked.
- What resources your containers use.
Note: AWS ECS also supports mounting EFS volumes.
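To make the idea concrete, here is a minimal task definition sketch, registered with the AWS CLI. The family name, container name, and image URI are placeholders; the account ID is the standard AWS documentation placeholder:

```shell
# Write a minimal task definition for the Fargate launch type.
# "awsvpc" network mode and the cpu/memory pairing (0.25 vCPU, 512 MB)
# are valid Fargate combinations.
cat > taskdef.json <<'EOF'
{
  "family": "myapp",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/myapp:1.0",
      "portMappings": [{ "containerPort": 80 }],
      "essential": true
    }
  ]
}
EOF

# Register it (requires AWS credentials and the AWS CLI):
#   aws ecs register-task-definition --cli-input-json file://taskdef.json
```

A service then references this task definition to keep the desired number of copies running.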
With that in mind, you have two ways of using ECS:
- By using EC2 Instances.
- By using Fargate.
ECS With EC2
In this case, containers will be deployed to EC2 Instances (VMs) created for the cluster. The merits include:
- Full control over the type of EC2 instance used. If your container is GPU-oriented, e.g., for machine learning, you can choose to run it on an EC2 instance optimized for that usage.
- Lower cost through Spot Instances, which can reduce your compute cost by up to 90%.
On the other hand, the main drawback is that:
- You are responsible for patching, managing network security, and the scalability of these instances.
Pricing:
When it comes to cost, you are charged for the EC2 instances running within your ECS cluster and for VPC networking.
ECS With Fargate
AWS Fargate was launched in 2017, and with this model, you don't have to worry about managing EC2 instances. AWS Fargate directly manages the underlying servers required to run your container by pre-configuring a cluster for you; you just add your workload to it. The advantages include:
- No infrastructure to manage.
- AWS deals with availability and scalability of your container application.
- Fargate Spot, based on the same principle as Spot Instances; AWS cites a cost reduction of up to 70%.
In contrast, the downside is:
- Only one networking mode is currently supported (awsvpc), which may limit your networking options in some specific scenarios.
A recent report by Datadog mentions that, in 2021, 32% of AWS container environments were using AWS Fargate. This trend suggests that companies are gradually shifting to serverless environments.
Pricing:
Fargate’s pricing is based on a “pay as you go” model. There are no upfront costs and you only pay for the compute and memory resources consumed. Here’s a pricing example for the region US West (Oregon):
- $0.04048 per vCPU per hour.
- $0.004445 per GB of memory per hour.
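Using those rates, a quick estimate of the monthly cost of one always-on Fargate task (a sketch, assuming a 1 vCPU / 2 GB task and a 720-hour month):

```shell
# Monthly cost = hours x (vCPUs x vCPU rate + GB x memory rate),
# at the us-west-2 rates quoted above.
cost=$(awk 'BEGIN { printf "%.2f", 720 * (1 * 0.04048 + 2 * 0.004445) }')
echo "estimated monthly cost: \$$cost"   # $35.55
```

Scaling out simply multiplies this figure by the number of tasks, which is what makes the "pay as you go" model easy to reason about.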
The table below maps the terminology used by ECS/Fargate to its Kubernetes equivalents:

| Infrastructure Layer | Component | ECS Fargate | Kubernetes |
|---|---|---|---|
| Workload | Deployment unit | Task | Pod |
| Workload | Desired state | Service | Deployment |
| Workload | Access endpoint | ALB | Ingress |
| Control plane | API endpoint | Frontend service | kube-apiserver |
| Control plane | Scheduler | Capacity manager | kube-scheduler |
| Control plane | Controller | Cluster manager | kube-controller-manager |
| Control plane | State management | State DB | etcd |
| Data plane | Guest OS | Amazon Linux 2 | Linux/Windows |
| Data plane | Agent | Fargate agent | Kubelet |
| Data plane | Container runtime | Docker | Containerd |
| Data plane | Network | ENI/VPC | CNI/kube-proxy |
AWS Lambda
AWS Lambda is a serverless service by AWS: you bring your code, whether Java, Go, C#, Python, PowerShell, Node.js, or Ruby, and AWS runs it as a callable function that complies with that language's Lambda interface. Lambda functions are most often invoked through AWS API Gateway, which exposes them as REST API calls. You might be wondering why we even mention AWS Lambda here, as there seems to be no link with Docker or container images. In December 2020, however, AWS Lambda began supporting container images of up to 10 GB in size. Using Lambda to run a Docker container on the cloud gives you:
- Scalability: Lambda will automatically create new instances of your function to meet demand as it can scale up to 500 new instances every minute.
However, you may have to contend with:
- Reduced portability: Since Lambda is AWS’ proprietary serverless tech, you will need to significantly adjust your function to move to another cloud provider.
- Slow cold starts: when we mentioned that Lambda can spin up new instances, we weren't talking about speed. A cold start for your function takes time and hits Java and .NET applications especially hard.
- No long-running tasks: Lambda functions can run for at most 15 minutes.
Pricing:
You are charged by the number of requests to your functions and by their duration (the time spent executing the function). Pricing also varies with the amount of memory you allocate to your function. Nonetheless, Lambda offers a free tier that does not expire with your annual AWS Free Tier term, including 400,000 GB-seconds of compute time every month.
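To see how far that free tier goes, consider a sketch workload (the memory size and duration are illustrative assumptions): a 512 MB function averaging 200 ms per invocation.

```shell
# GB-seconds = requests x duration (s) x memory (GB).
# 1M requests x 0.2 s x 0.5 GB:
gbsec=$(awk 'BEGIN { printf "%.0f", 1000000 * 0.2 * (512/1024) }')
echo "1M requests consume $gbsec GB-seconds"   # 100000
```

At 100,000 GB-seconds, a million such requests per month fit comfortably inside the 400,000 GB-second free allowance.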
AWS App Runner
Launched in May 2021, AWS App Runner makes it easy to bring a web application to the cloud without worrying about scaling or the infrastructure associated with it. Under the hood, it simply runs Amazon ECS with Fargate to execute your container, but you don't need to set up or configure anything related to Fargate to get going. It can run in build mode, where it pulls code from your GitHub repository and rebuilds the application on every commit you push to your main branch. Alternatively, it can run in container mode, where you connect your container registry (only AWS ECR is supported) and point to your image. If you want to see what AWS has planned for App Runner, they outline it in their detailed public roadmap.
The core advantage of AWS App Runner when it comes to running a Docker container on the cloud is that:
- It is easy to configure and provides a simple way to get a web application to run in the cloud.
On the other hand, the disadvantages include:
- Build mode only supports Python and Node.js runtimes.
- Can’t scale down to 0, you need to pay for at least one instance.
- Build mode has no integration with AWS CodeCommit or other Source Control Management, meaning you will be forced to use GitHub.
- App cannot communicate with private VPC: More details here.
Pricing:
You are billed for what you use. For example, if it is always running, a minimal instance (1 vCPU, 2 GB) costs $0.078 per hour, or around $56 per month, plus a little extra for automatic builds and deployments:
- $0.064 per vCPU per hour.
- $0.007 per gigabyte per hour.
- Automatic deployment: $1 per application per month.
- Build Fee: $0.005/build-minute.
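The ~$56 figure above follows directly from the per-hour rates (a sketch, assuming a 720-hour month and an always-on 1 vCPU / 2 GB instance):

```shell
# Monthly compute = hours x (vCPUs x vCPU rate + GB x memory rate).
cost=$(awk 'BEGIN { printf "%.2f", 720 * (1 * 0.064 + 2 * 0.007) }')
echo "monthly compute: \$$cost"   # $56.16
```

Automatic deployment and build fees come on top of this compute charge.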
Detailed pricing information is available on their website.
Azure Container Instances (ACI)
Microsoft entered the CaaS market when Azure Container Instances was announced in July 2017. It offers:
- Support for persistent storage by mounting an Azure file share to the container.
- Co-scheduled groups: Azure supports scheduling multi-container groups that share a host machine, local network, and storage.
- Virtual network deployment: containers can run inside your virtual network and communicate with other resources in that network.
- Full control over the instance that runs your container; adding GPU compute power is not a problem with ACI.
The only downside associated with it is that it:
- It can only run Docker containers pulled from a registry.
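Getting a container running on ACI is a single CLI call. A minimal sketch (the resource group and DNS label are placeholders; requires the Azure CLI and an authenticated subscription), using Microsoft's public sample image:

```shell
# Create a container instance from a public image and expose port 80
# behind a public DNS name.
az container create \
  --resource-group my-rg \
  --name myapp \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --cpu 1 --memory 1.5 \
  --ports 80 \
  --dns-name-label myapp-demo

# Look up the public FQDN the container is reachable at:
az container show --resource-group my-rg --name myapp \
  --query ipAddress.fqdn
```

Swapping in a private registry image only requires adding registry credentials to the `create` call.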
Pricing:
Billing is per hour of vCPU, memory, GPU, and OS used; a container that requires a GPU or Windows will be more expensive.
- $0.04660 per vCPU per hour.
- $0.0051 per GB of memory per hour.
Google Cloud Run
Google Cloud Run, GCP’s CaaS solution, became available in November 2019.
Similar to the other options of running Docker containers in the cloud listed above, this service is built on the Knative platform based on Kubernetes. Similar to AWS App Runner, you can choose to point to a container registry or repository that contains your application code.
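A minimal deployment sketch (the service name, project ID, image path, and region are placeholders; requires the gcloud CLI and an authenticated project):

```shell
# Deploy a container image to Cloud Run and make it publicly callable.
gcloud run deploy myapp \
  --image gcr.io/my-project/myapp:1.0 \
  --region us-central1 \
  --allow-unauthenticated

# Split traffic 50/50 between two revisions (revision names are
# placeholders; Cloud Run generates them on each deploy).
gcloud run services update-traffic myapp \
  --region us-central1 \
  --to-revisions myapp-00001-abc=50,myapp-00002-def=50
```

Each deploy creates a new immutable revision, which is what makes the traffic-splitting feature below possible.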
Benefits:
- Use of secrets from Google Secret Manager.
- Deployment from source code supports Go, Python, Java, Node.js, Ruby, and more.
- Supports traffic splitting between revisions.
Disadvantage:
- The only disadvantage is not specific to Cloud Run but to GCP as a whole: it is available in fewer regions than Azure or AWS, for instance.
Pricing:
Anyone can try Cloud Run for free using the $300 credit GCP offers new customers. After that, you are billed once you exceed the free tier.
The free monthly quotas for Google Cloud Run are as follows:
- CPU: The first 180,000 vCPU-seconds.
- Memory: The first 360,000 GB-seconds.
- Requests: The first 2 million requests.
- Networking: The first 1 GB egress traffic (platform-wide).
Once you exceed these limits, however, you'll need to pay for your usage. The paid-tier costs for Google Cloud Run are:
- CPU: $0.00144 per vCPU per minute.
- Memory: $0.00015 per GB per minute.
- Requests: $0.40 per 1 million requests.
- Networking: $0.085 per GB delivered.
Conclusion
Cloud providers are always innovating to fulfill the needs of customers by continually introducing new services. A minor concern is that the growing number of services and features makes the landscape even more confusing for developers and organizations. Although there are slight differences between the AWS, Azure, and Google Cloud offerings, they all share a common goal: simplifying how Docker containers run on the cloud while maintaining the flexibility required to support a wide range of developer use cases.
Published at DZone with permission of Chase Bolt.