Transitioning from Monolith to Microservices
How did we get here?
In the large monolithic application development days of yesteryear, deploying an application was no trivial task. It started with a purchase order for multiple servers, followed by several days of racking, wiring, and configuring. If you were lucky, your deployment worked on the first try because all of the parties involved had thought of nearly everything that could go wrong. These constraints set the stage for monoliths by rewarding applications that could be deployed in the fewest, simplest steps.
Fast forward to today—where you can provision massive amounts of infrastructure at the click of a button in the cloud—and you find yourself living in the golden age of infrastructure automation tools. This dramatic shift in infrastructure availability is one of the main catalysts for the advent of microservices.
You might not need microservices architecture if…
The popularity of microservices is evident. You’d be hard-pressed to find a developer-centric vendor that doesn’t have some mention of microservices in their blog roll or documentation. For example, Kong's offerings of Kong Gateway and Kong Mesh specialize in supporting businesses moving from a monolith to microservices architecture.
However, it’s important to know that microservices aren’t a magic bullet for every business application. Here are some good indicators that your current monolith architecture may be sufficient.
You’re not having trouble scaling.
Some business domains see relatively steady traffic to their applications and might not need the dynamic scaling that comes with a microservices architecture. A key benefit of microservices is the ability to scale on demand. If scaling isn’t a challenge for your business, you may not need microservices.
Your monolithic architecture is already flexible enough to meet market demands.
Maybe you’ve had enough foresight to keep your application flexible and isolate functionality as needed. The ability to introduce new features and functionality quickly is another key tenet of microservices. If you’re already able to do this, excellent job!
You’re not having issues with deploying your application.
Speed of deployment is another key benefit of microservices. If your deployment pipeline is already fast enough for your business needs, you may not need microservices.
Why should I switch to a microservices architecture?
After evaluating your business context, you may decide there is some room for improvement. You may recall a time when a spike in traffic overloaded your application. You may also remember that time when backlogged technical debt needed to be paid down and blew up your delivery timeline. Then, there may have been the time your sales team sold a new feature as integrated and available in months when, in reality, it would take over a year to get right.
If any of those situations resonate with you, your application may benefit from transitioning to microservices. Let’s consider some key roles in your organization, along with key benefits for each role.
Corporate Stakeholder—this person cares about competitive positioning, market characteristics, and Gartner Magic Quadrant. Developing well-defined microservices allows your company to deploy functionality faster, meet market needs, and position itself as an industry leader rather than a follower.
Product/Project Manager—this person cares about team morale, accurate timelines, and supporting the business needs. Developing well-defined microservices allows your managers to give better timelines and keep the development team morale up by reducing technical debt.
Developer—this person wants to work on interesting problems, write new features, and receive recognition for their business impact. Developing well-defined microservices reduces the need to troubleshoot issues in one large, complex code base and keeps them happier by reducing the stress associated with monolith development and deployment.
What drives the transition to microservices?
Each of the above key players can be a driver to implementing microservices, but most often, the development team drives the transition. After all, developers implement the microservices. Discussions between the personas that lead to individual microservice development are often very abstract, so it’s important to read between the lines.
Corporate Stakeholder: We need to be able to respond to the market faster! Competitor A already has feature X!
Product/Project Manager: We’re doing our best. There are a lot of challenges with microservices deployment and development planning.
Developer: If we didn’t have so much technical debt, we’d be able to add new features more quickly.
These sentiments—and you've likely heard similar statements in your own organization—highlight the need for microservices without ever using the word microservices!
How do I measure the success of this transition?
To gauge whether or not you’re successful in your monolith to microservice transition, you must measure, measure, measure. If there’s no return on investment (ROI), there’s no reason to transition. You can only measure ROI successfully if you have the proper metrics in place for it. Let’s consider several important metrics.
How long does it take to get one code change into production?
This is an important one! With a monolith, even a relatively simple code change necessitates multi-layer testing efforts to prevent regressions. Consider how long those tests take and the person-power involved in running them. Then, there's the actual deployment once the tests all pass. Start your stopwatch as soon as the code is committed and stop it once the change is live in production. That will be the metric to beat with your microservice development.
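As a rough sketch of how you might track this metric (the timestamps and function here are hypothetical, not anything from this article), you could compute commit-to-production lead time from your version control and deployment logs:

# Hypothetical sketch: measure commit-to-production lead time.
# The timestamps would come from your own VCS and CI/CD tooling.
from datetime import datetime, timezone

def lead_time_hours(commit_time, deploy_time):
    """Return the hours between a code commit and its production deployment."""
    return (deploy_time - commit_time).total_seconds() / 3600

# Example values only; substitute timestamps pulled from git and your deploy logs.
committed = datetime(2021, 8, 2, 9, 15, tzinfo=timezone.utc)
deployed = datetime(2021, 8, 4, 17, 40, tzinfo=timezone.utc)
print(f"Lead time: {lead_time_hours(committed, deployed):.1f} hours")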
Are you hitting your SLA?
We’re using SLA in a relatively liberal sense since the term means different things to different businesses. Put simply, let's define an SLA as the time you need to be actively serving your primary business need. For example, if your business runs an API service for bank fraud alerts, how reliably has your service responded to customer API requests over the past month or year?
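As a minimal sketch of how you might quantify this, assuming you can export per-request status codes from your gateway or application logs (something this article doesn't cover), you could compute a simple availability percentage:

# Hypothetical sketch: availability as the fraction of requests served without a 5xx error.
def availability(status_codes):
    if not status_codes:
        return 100.0
    ok = sum(1 for code in status_codes if code < 500)
    return 100.0 * ok / len(status_codes)

# Example only: a month of (heavily summarized) response codes.
codes = [200] * 9950 + [503] * 50
print(f"Availability: {availability(codes):.2f}%")  # 99.50%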
Important immeasurables
While hard numbers are important, there are other metrics that are very important but difficult to measure formally. Take periodic straw polls of the team to check in on morale and gauge the team’s overall energy. Do they feel excited to sling code and deploy? Are they more energized by their work? A happy development team is a productive development team!
How do I make the transition to microservices?
Not every business will take the same approach to the microservices transition. However, at a high level, every organization needs a plan, supporting infrastructure, supporting metrics, and some (gasp!) culture change. With those items as our baseline, let’s get into the details!
Make a plan
At its most basic, your plan of attack should include the following steps:
Start by identifying low-impact functionality in your monolith. Begin with low-impact functions, not business-critical ones. There is a slight learning curve—and with that, an element of risk—that comes with making this transition for the first time.
If you’re running an online bookstore, you probably don’t want to migrate your inventory or purchasing systems first, as these are inherently business-critical functions. Moving your book ratings or user comments into their own microservice would be lower risk.
Determine the key components and the traffic flow between them. Having identified the functionality, which components in the monolith code base do you need to extract into the microservice? Will the user be able to reach the microservice directly, or will the monolith proxy the request?
Put all of these details into a high-level objective, codifying all of the implementation steps. By anticipating and documenting your transition process for a single functionality, you set yourself up for better auditing and repeatability of that process.
Use an API gateway
Your monolith application had a single “front door” through which your users entered. With microservices, it can seem like you now have to keep track of several “front doors.” Facilitating communication between microservices and with the outside world may seem like a daunting task.
An API gateway abstracts away the details of calling the microservices backing your business functionality, once again providing that single “front door” through which all requests must enter. The API gateway handles authentication, authorization, and traffic routing to the correct microservice destination. Kong Gateway makes this incredibly simple. With a few commands, you’ll have the complexities of communication traffic abstracted away.
Measure, measure, measure
While there are many goals for a microservice architecture, the key wins are flexibility, delivery speed, and resiliency. After establishing your baseline for the delta between code commit and production deployment completion, measure the same process for a microservice.
Similarly, establish a baseline for “business uptime” and compare it to that of your post-microservice implementation. “Business uptime” is the uptime required by necessary components in your architecture as it relates to your primary business goals.
With a monolith, you deploy all of your components together, so a fault in one component could affect your entire monolithic application. As you transition to microservices, the pieces that remain in the monolith should be minimally affected, if at all, by the microservice components that you’re creating.
For example, if your business is a bookstore, your critical pieces of business are the products (books) and the payments system. Suppose you’ve abstracted your book ratings into a microservice. In that case, your business can still function—and would be minimally impacted if the book ratings service goes down—since what your customers primarily want to do is buy books.
Do DevOps
The last but immensely important piece of the transition is related to culture. Culture change is difficult, but it’s necessary for success in this microservices transition. No longer does the development team write an application and toss it over the wall to operations to deploy. The new normal includes CI/CD pipelines, automation, metrics, monitoring, and much more. Every team member needs to be cross-functional and collaborate heavily to achieve success!
Demonstration by example
For our example, we’ll take a very common monolith architecture which consists of frontend and backend code, and show how you can extract part of the functionality into its own microservice.
In this case, we’ll use a Django application backed by an SQLite database that has two “apps”: movies and ratings. We’ll extract the ratings app into its own microservice hosted behind Kong Gateway, which we’ve set up using the instructions for the official Kong image.
Setting up
We won’t go into too much detail on each of the commands in this section since the only goal is to get a project template set up for you to follow along.
$ mkdir moviedb && cd moviedb/
$ python3 -m venv venv && source venv/bin/activate
$ pip install django && pip freeze >> requirements.txt
$ django-admin startproject moviedb .
$ ./manage.py startapp movies
$ ./manage.py startapp ratings
$ ./manage.py migrate
At this point, we have a full Django setup with a default SQLite database and all of our migrations in place. We’ll start with two models:
# movies/models.py
from django.db import models

class Movie(models.Model):
    title = models.CharField(max_length=300, null=False)
    released = models.DateField()
    description = models.CharField(max_length=1000)
# ratings/models.py
from django.db import models
from django.core.validators import MinValueValidator, MaxValueValidator

from movies.models import Movie

class Rating(models.Model):
    movie = models.ForeignKey(Movie, on_delete=models.CASCADE)
    created_at = models.DateTimeField(auto_now_add=True)
    explanation = models.CharField(max_length=5000)
    rating_value = models.IntegerField(
        null=False,
        validators=[MinValueValidator(1), MaxValueValidator(10)]
    )
To explain briefly, we have two models which get persisted to the database: Movie and Rating. Rating has a movie field which relates to the Movie model to establish a relationship between these two models. In most cases, you’ll have one movie with many ratings. Operationally, that means the load will increase as the number of ratings increases, which will slow down your page loads. Instead of letting the ratings for a movie slow down the website, we’ll move ratings to their own microservice behind Kong Gateway.
For the sake of simplicity, we’ll move the ratings code into its own Django project (called ratings_service) separate from the moviedb code.
$ mkdir ratings_service && cd ratings_service/
$ python3 -m venv venv && source venv/bin/activate
$ pip install django && pip freeze >> requirements.txt
$ django-admin startproject ratings_service .
$ ./manage.py startapp ratings
$ ./manage.py migrate
$ cp ../moviedb/ratings/models.py ./ratings/
The critical piece we need to copy over is the ratings/models.py file. From there, the entire ratings app can be removed from the moviedb project.
However, we do need to make one adjustment to the Rating model now that we no longer have the hard dependency on Movie. All we need to change is the foreign key reference, which becomes a text field in ratings.
# ratings/models.py (in the ratings_service project)
from django.db import models
from django.core.validators import MinValueValidator, MaxValueValidator

class Rating(models.Model):
    movie = models.CharField(max_length=5000)
    created_at = models.DateTimeField(auto_now_add=True)
    explanation = models.CharField(max_length=5000)
    rating_value = models.IntegerField(
        null=False,
        validators=[MinValueValidator(1), MaxValueValidator(10)]
    )
After making that adjustment to the model, we create and run the migration for the model changes.
$ ./manage.py makemigrations
$ ./manage.py migrate
It’s important to explain why we’re making this change. Specializing in function but generalizing in operation will help you derive the most benefit from a microservices approach. In this case, our new microservice specializes in ratings management, but it's generalized such that it can take any movie title and save a rating for it. To make the ratings more unique—if there are multiple movies with the same name—we could alternatively use an EIDR number. Taking it a step further, we could even change the movie field to something more generic and save ratings for data objects besides just movies!
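To illustrate that idea, here is one hypothetical generalization (the field names are our own and not part of the example project) that rates an arbitrary subject and keeps an optional external identifier such as an EIDR number:

# Hypothetical generalization of the ratings model (not from the article's project).
from django.db import models
from django.core.validators import MinValueValidator, MaxValueValidator

class Rating(models.Model):
    subject = models.CharField(max_length=5000)                  # e.g., a movie title, a book, a product
    external_id = models.CharField(max_length=100, blank=True)   # e.g., an EIDR number for movies
    created_at = models.DateTimeField(auto_now_add=True)
    explanation = models.CharField(max_length=5000)
    rating_value = models.IntegerField(
        null=False,
        validators=[MinValueValidator(1), MaxValueValidator(10)]
    )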
There’s one last code change we need to make. With the ratings code out of the moviedb project, we need to make an HTTP request since we can’t do a database-level join on Movies and Ratings. In our view for the movie list, we need to update the view class from this:
class MovieListView(ListView):
    model = Movie

    def get_queryset(self):
        return Movie.objects.all().select_related()
to this:
import requests  # needed at the top of the views module

class MovieListView(ListView):
    model = Movie

    def get_context_data(self, **kwargs):
        data = super().get_context_data(**kwargs)
        # localhost:8000 is our locally running Kong Gateway
        data['ratings'] = requests.get(
            'http://localhost:8000',
            headers={'Host': 'ratings-service.local'}
        ).json()  # parse the JSON body so the template can iterate over ratings
        return data
Note that this is technically still a blocking call since the page will wait for results. However, this approach removes the hard dependency and allows you to write logic around this specific request.
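One way to act on that flexibility (a sketch of our own, not code from the article) is to add a timeout and a fallback so a slow or unavailable ratings service degrades gracefully instead of stalling the movie list:

# Hypothetical hardening of the ratings call: time out quickly and fall back to an
# empty list so the movie page still renders if the ratings service is unavailable.
import requests

def fetch_ratings(gateway_url='http://localhost:8000',
                  host_header='ratings-service.local', timeout=0.5):
    try:
        response = requests.get(gateway_url,
                                headers={'Host': host_header},
                                timeout=timeout)
        response.raise_for_status()
        return response.json()
    except requests.RequestException:
        return []  # degrade gracefully: render the page without ratings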
Lastly, we’ll create a small Dockerfile in our ratings_service to run alongside Kong in the container runtime.
# Dockerfile
FROM python:3
WORKDIR /home/chad/ratings_service
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "manage.py", "runserver", "0.0.0.0:9000" ]
We’ll also need to set ALLOWED_HOSTS = ["*"] in the settings for the ratings_service to ensure the traffic is allowed through.
# ratings_service/ratings_service/settings.py
ALLOWED_HOSTS = ["*"]
$ docker build -t ratings_service .
$ docker run -d -p 9000:9000 ratings_service
Now that we have our moviedb and ratings_service separated, we need to tell Kong how to serve the traffic. We’ll add a service and route, so Kong knows how to proxy our requests:
$ curl -i -X POST http://localhost:8001/services/ -d 'name=ratings-service' -d 'url=http://host.docker.internal:9000'
HTTP/1.1 201 Created

$ curl -i -X POST http://localhost:8001/services/ratings-service/routes -d 'hosts=ratings-service.local' -d 'paths=/'
HTTP/1.1 201 Created
Now, we can access our ratings service!
$ curl -i -X GET --url http://localhost:8000/ --header "Host: ratings-service.local"

[{
  "movie": "Gone with the Wind",
  "created_at": "2021-08-08 16:01:46",
  "explanation": "Classic movie!",
  "rating_value": 9
}, {
  "movie": "The Little Shop of Horrors",
  "created_at": "2021-08-08 16:01:46",
  "explanation": "I'm a mean green mother from outer space and I'm bad!",
  "rating_value": 7
}]
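The ratings_service view and URL wiring that produce this JSON aren't shown in the article; a minimal sketch might look like the following (the names are our own, and the exact date formatting may differ slightly from the sample output above):

# ratings/views.py -- hypothetical; the article doesn't show this file
from django.http import JsonResponse

from .models import Rating

def rating_list(request):
    # Return every rating as a JSON array for the movie list page to consume.
    ratings = Rating.objects.values('movie', 'created_at', 'explanation', 'rating_value')
    return JsonResponse(list(ratings), safe=False)

# ratings_service/urls.py -- hypothetical wiring
from django.urls import path

from ratings.views import rating_list

urlpatterns = [
    path('', rating_list),
]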
We’ve intentionally skipped over the data loading steps, as they aren’t terribly important for the sake of this example. You undoubtedly already have a wealth of data to test with; the main point here is how to pick a piece of a monolith’s functionality, extract it, and put it into its own code base, then serve it as a microservice application behind Kong Gateway.
What next?
We’ve covered a lot of ground in this complex topic of transitioning from the monolith to microservices! First, we considered some of the cases where microservices may not be appropriate for your business context. Then, we looked at the stakeholders involved in an organization’s transition to microservices, followed by the metrics an organization needs to capture to determine whether their transition was successful. Lastly, we covered some of the key practices to adopt to make this transition, walking through a concrete example of how to get started.
There is much more to cover in practice than what we could get to in this example. However, we were able to highlight the importance of incorporating a mature and resilient API gateway like the one offered by Kong. Once you’ve implemented Kong Gateway, you can also leverage the plugin ecosystem to gain more of the benefits of microservices, such as metrics and monitoring, and to enable DevOps through deployment automation.