Microservices Anti-Patterns
In this post, a developer talks about the microservices antipatterns that he's seen while working with clients of all sizes.
Microservices is a silver bullet, magic pill, instant fix, and can't-go-wrong solution to all of software's problems. In fact, as soon as you implement even the basics of microservices, all of your dreams come true; you will triple productivity, reach your ideal weight, land your dream job, win the lottery 10 times, and be able to fly, clearly.
While this sounds like a lot of hyperbole wrapped up in some BS, if you have been listening to anything around microservices recently, you have most likely heard something not too far from this exaggerated sentiment, especially if it is coming from sales folks.
As a result, you or someone you know has likely been charged by management with implementing a solution in microservices, or refactoring an existing application to take advantage of microservices, to ensure that you get all the magic. With so much overinflation of the truth out there, chances are you have also implemented a microservices antipattern. These antipatterns are actually more common in the wild than fully functional microservices architectures.
Overview
In this post, we will cover the most common antipatterns that I have witnessed in the wild:
- Break the Piggy Bank
- Everything Micro (Except for the Data)
- We are Agile! a.k.a. The Frankenstein
Each one of these results from a common misconception. We will do our best to define these patterns and their symptoms. After each, we will also show a way out of the mess so that you can recover and begin to move towards a better implementation. Let's get started!
Break the Piggy Bank
This antipattern is one of the most common when refactoring an existing application to a microservices architecture. When applications start out as monoliths and grow over time, they eventually get so large that even the basic parts of the SDLC become excruciating tasks:
- Deployments can take several hours, if not days, and are often very high risk.
- Maintenance becomes part engineering rigor, part archaeology.
- Performance starts to be measured in "how many days since a sev 1 outage" signs.
- Regression testing requires teams, automation, dedicated data centers, and an entirely new software organization.
When an application is in this state, it is easy to think about it like a pig. The monolith has become untenable, and now is a prime candidate for microservices.
Microservices address deployment, maintenance, performance, and testing concerns by breaking the large code base down into smaller, decoupled services that communicate through "dumb pipes" (most commonly HTTP).
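As a rough sketch of what one of those decoupled services might look like (Spring Boot and the endpoint names here are assumptions for illustration, not something the article prescribes), each service owns a small slice of functionality and exposes it over plain HTTP:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.server.ResponseStatusException;

// Hypothetical example: a small, single-purpose order service exposing its
// functionality over HTTP ("dumb pipes"). The in-memory map stands in for
// this service's own data store.
@RestController
@RequestMapping("/orders")
public class OrderController {

    private final Map<String, String> orders = new ConcurrentHashMap<>();

    // Other services call this endpoint over HTTP instead of calling code in-process.
    @GetMapping("/{id}")
    public String getOrder(@PathVariable String id) {
        String order = orders.get(id);
        if (order == null) {
            throw new ResponseStatusException(HttpStatus.NOT_FOUND, "order not found");
        }
        return order;
    }
}
```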
When a team is first introduced to this, especially when dealing with a pig, the first inclination is to smash the whole thing up into little pieces, like a piggy bank with a hammer. Because, of course, 1000 services will be way better than one bloated one, right? Not so fast. Breaking up a monolith into its parts is not as simple as a smash.
The problem with this method is that without intentional division of services, either by domain, unit of work, or potential for change, you end up creating several mini-monoliths. Oftentimes, services are decomposed too granularly and are not really separate, so they end up having to be deployed, scaled, and maintained together.
The other issue that this exacerbates is the complexity of the overall application. With code, complexity is neither created nor destroyed when moving from a monolith to microservices; it simply changes from in-proc service calls to HTTP(S) calls, which add a whole other layer. Orchestration, distributed transactions, service discovery, and recovery are just a few of the new concerns that application developers need to consider, yay.
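To make that shift in complexity concrete, here is a minimal, hypothetical sketch of what used to be an in-proc method call turning into an HTTP call, where timeouts, failures, and discovery suddenly become the caller's concern (the service name and endpoint are made up for illustration):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Hypothetical sketch: what used to be a simple in-process method call
// (inventoryService.reserve(sku)) now crosses the network, so timeouts,
// non-200 responses, and retries become the caller's problem.
public class InventoryClient {

    private final HttpClient client = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))
            .build();

    public boolean reserve(String sku) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://inventory-service/reservations/" + sku))
                .timeout(Duration.ofSeconds(3))          // per-request timeout
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();

        // A real implementation would also handle retries, circuit breaking,
        // and service discovery instead of a hard-coded host name.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.statusCode() == 200;
    }
}
```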
A Way Forward
If you find yourself in this situation, all is not lost. There are ways out, but they will require some work. The first thing to do, which will help with all of the issues caused by a "piggy bank smash," is to analyze your application and try to recompose service boundaries based on domains.
This will usually result in combining several services that are not really separate and more clearly defining your entities and value objects. We will not dive into domain-driven design (DDD) in this post, but it is very helpful for refactoring applications that have been broken into too many pieces. For more information, check out the following link:
https://en.wikipedia.org/wiki/Domain-driven_design
Once you have your reasonably sized and composed services, implementing automation will ensure that you can resolve deployment and build issues for the long haul. If you have not already implemented a CI/CD pipeline, now is the time. With multiple options out there, from Jenkins to TFS, you can find a CI/CD platform that will help enable your automation efforts. From there, tools like Ansible, SaltStack, and Chef/Puppet can help you automate infrastructure as well. The idea with all of this is to take the human element out of repetitive tasks in as many areas of the stack as possible. This way, if something goes wrong with your newly composed services, you can quickly recover or, in some cases, completely rebuild your application and its dependencies with an automated process. This is a manageable task if you have a handful of services, and a burdensome one if you have 1000.
Even if you do not fully automate your entire solution or follow DDD principles to the letter, recomposing your application will result in a much more effective solution that is maintainable and usually eases deployment woes. The key to getting out of this antipattern is to ensure that you are not creating services along arbitrary boundaries, but instead dividing by units of work/domains so that they can truly be deployed, scaled, and maintained separately from the rest of the application.
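As a loose illustration of recomposing by domain (all names here are hypothetical), a handful of too-granular services that always deploy together can be folded back into a single service that owns the whole aggregate:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of recomposing by domain. Instead of separate
// "order", "order-line", and "order-status" services that always had to be
// deployed together, one service owns the whole Order aggregate.
public class Order {

    public enum Status { NEW, PAID, SHIPPED }

    private final String id;
    private final List<String> lines = new ArrayList<>(); // order lines live inside the aggregate
    private Status status = Status.NEW;

    public Order(String id) {
        this.id = id;
    }

    public void addLine(String sku) {
        if (status != Status.NEW) {
            throw new IllegalStateException("cannot add lines after payment");
        }
        lines.add(sku);
    }

    public void markPaid() {
        status = Status.PAID;
    }

    public String getId() { return id; }
    public List<String> getLines() { return List.copyOf(lines); }
    public Status getStatus() { return status; }
}
```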
Everything Micro (Except for the Data)
This is another extremely common antipattern, especially in enterprise organizations. Most often, this design arises out of some form of cap-ex spend on a data center or an RDBMS purchase, or an existing data team that is uncomfortable with data in the cloud and/or micro-data in general.
The key sign of this antipattern is that just about everything in the application space is decomposed reasonably, there is a mature CI/CD process in place, and maybe even some distributed design patterns are in use. However, there is one giant data store behind all of the microservices. It may be highly available and fully replicated, but it is still a monolithic data structure.
These are most common with Microsoft's SQL Server, Oracle, and DB2 data stores, mainly because their licensing models do not lend themselves easily to a database-per-service implementation in the wild.
The challenges with this approach are more subtle than with other antipatterns, because it often has no noticeable negative impact on the application until later in its life cycle. Keeping track of data/schema changes is a common challenge with this setup, because any change to a production database may require a full database outage, or may cause locks and blocks if the system is still live when changes are executed.
These are usually solvable issues, but they require detailed governance and approval processes. SLAs are also at the most risk with this antipattern because of the aforementioned potential for outages while changes are executed. With large data stores, access control can become extremely complex, as applications are usually restricted in which schemas/functions they can affect and which activities they can perform, so as not to affect other applications on the same data store.
With multiple applications accessing the same data store, resource contention eventually becomes the main issue. As applications grow and expand, so does the data footprint. Data archival, cleanup, and tuning become crucial tasks to keep everything in balance. Also, the size of a data store is often associated with cost, so very large instances can eventually become cost prohibitive to operate.
With a monolithic data store, vertical scaling and some sort of clustering become the only feasible ways to address performance issues. Nothing is more fun than trying to figure out how many flash I/O cards you can jam into a database server before your application crashes and locks up the database entirely!
A Way Forward
Unfortunately, the way out of this predicament usually involves changing some hearts and minds in your organization. In order to move away from a monolithic data store, you have to start small. If your application is already composed around domain bounded contexts, it will be easier to take a piece of it and put it into a separate, single-purpose data store. If you have not implemented bounded contexts, doing so can be a great way to define those boundaries in your application.
For more information, please see the Keyhole blog post on implementing bounded context:
Implementing A Bounded Context
Depending on your platform of choice, there are often several data stores to choose from: SQL-based stores, document-based stores, or even simple blob storage can do the trick. The key is to use a data store that best matches the function your application is trying to perform.
If you have a specific domain in your application that is focused on user profile data, for example, a normalized SQL structure might not be the best fit. A NoSQL data store that is flatter and object-based may be a better fit, as the shapes can be flexible and do not require a data model update as their contents change and evolve. Pulling that specific data out and migrating it from your monolithic database to the NoSQL one can be accomplished in a number of ways, but before that can happen, the organization needs to be convinced that the new data store will be effective and supported.
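For example, here is a minimal sketch of that flexible, document-shaped profile using MongoDB's Java driver (the choice of store, the database, and the field names are assumptions for illustration; any document store works similarly):

```java
import org.bson.Document;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;

// Hypothetical sketch: the profile "shape" can gain nested fields over time
// without a schema migration on a shared relational database.
public class UserProfileStore {

    private final MongoCollection<Document> profiles;

    public UserProfileStore(String connectionString) {
        MongoClient client = MongoClients.create(connectionString);
        profiles = client.getDatabase("profiles").getCollection("userProfiles");
    }

    public void save(String userId) {
        Document profile = new Document("userId", userId)
                .append("displayName", "Sample User")
                .append("preferences", new Document("theme", "dark")); // nested, flexible shape
        profiles.insertOne(profile);
    }
}
```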
A couple of implementation approaches can address these concerns. For example, if the application is young enough, a simple lift-and-shift should suffice. For larger data sets, a gateway of some sort in front of the data calls can be employed to leverage a lambda pattern and use normal traffic in the system to fill the new data store over time. Either way, starting with a defined domain or section of data will allow you to gradually get your application data out of the monolith.
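Here is one hedged sketch of what such a gateway could look like: reads try the new store first, fall back to the monolith's database, and backfill the new store with whatever normal traffic touches (the interfaces are illustrative stand-ins, not a prescribed API):

```java
import java.util.Optional;

// Hypothetical sketch of a gateway in front of the data calls during migration.
public class ProfileGateway {

    // Illustrative stand-in for your actual data access code.
    public interface ProfileStore {
        Optional<String> find(String userId);
        void save(String userId, String profileJson);
    }

    private final ProfileStore legacyDatabase;   // the monolithic data store
    private final ProfileStore newDocumentStore; // the new single-purpose store

    public ProfileGateway(ProfileStore legacyDatabase, ProfileStore newDocumentStore) {
        this.legacyDatabase = legacyDatabase;
        this.newDocumentStore = newDocumentStore;
    }

    public Optional<String> getProfile(String userId) {
        Optional<String> migrated = newDocumentStore.find(userId);
        if (migrated.isPresent()) {
            return migrated;
        }
        // Not migrated yet: read from the monolith and copy it forward.
        Optional<String> legacy = legacyDatabase.find(userId);
        legacy.ifPresent(profile -> newDocumentStore.save(userId, profile));
        return legacy;
    }
}
```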
We Are Agile! a.k.a. The Frankenstein
This last antipattern we will look at occurs when teams begin the shift from Waterfall software development to Agile software development. At the beginning of these process shifts, teams usually end up implementing some version of agile-fall. Often this is hallmarked by the saying "We are agile, so we do not have to plan things anymore!" This is in reaction to the heavy design documents, project plans, and Gantt timelines that were standard in the Waterfall methodology.
The misconception that 'agile' means the team no longer has to plan things out and can be nimble and adapt to customer needs (read: whims and shiny things) results in functionality being decomposed in a vacuum. What does the customer/client/product owner want? Well, that is what we will build this Sprint!
With enough cycles through this process, you end up with several disparate pieces of functionality, often implemented in the same or similar ways as something previously built, that all have to be bolted together to share data and create some semblance of a cohesive application. This ultimately creates a Frankenstein software monster that is just a bunch of parts sewn together, and it gets worse over time.
This antipattern becomes self-perpetuating because as time goes on it becomes more complex to deploy and bolt on new things, especially if they have to interact with existing parts. This can result in technical debt that seems to balloon and often rears its head in the form of some undesirable but hidden behavior like lost transactions, orphaned instances, and unexplained slowdowns.
Eventually, Frankenstein will start to fall apart at the seams and more effort will be expended just trying to keep things together than on actually developing new functionality.
A Way Forward
To beat Frankenstein in this instance, you need to take some cycles to define what your application does and map that to the concrete implementations in your code. You will most likely find some duplication or code paths that are never touched. Eliminating this cruft is a good first step.
Then, to move forward, focus on the contracts for the interfaces in your system. Every time a service calls another service or a data resource, define that contract in something like Swagger to help bring to light what the system is actually doing.
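As a small, hypothetical example using the OpenAPI (Swagger) annotations in a Spring controller (the controller, paths, and callers named here are assumptions for illustration), the contract for a single call might be documented like this:

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.responses.ApiResponse;

// Hypothetical sketch: annotating an endpoint with OpenAPI (Swagger) metadata
// so the contract between services is documented and discoverable.
@RestController
public class AccountController {

    @Operation(summary = "Look up an account by id",
               description = "Called by the billing and notification services.")
    @ApiResponse(responseCode = "200", description = "Account found")
    @ApiResponse(responseCode = "404", description = "No account with that id")
    @GetMapping("/accounts/{id}")
    public String getAccount(@PathVariable String id) {
        return "{\"id\": \"" + id + "\"}"; // placeholder body for the sketch
    }
}
```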
Once these two activities have been completed, the next step involves working with your Project Manager or Agile Coach to help redefine how features are implemented in the application. Shooting from the hip should no longer be an option. Instead, a small design spike at the beginning of each feature will help ensure that the functionality is not being duplicated, uses good coding practices/design patterns, and leverages existing code where appropriate. This activity is usually performed by a senior developer or a technical architect, even if the implementation is completed by another team member. It will help ensure that all code is being purposefully added to the system in a manner that extends functionality without incurring additional technical debt.
The final part of moving away from the monster is to begin adding time or entire Sprints to address the technical debt that has built up. This can be done as a percentage of each Sprint, or in dedicated Sprints if there are lulls in requirements. Either way, incorporating some garbage collection and cleanup will begin to put the toothpaste back in the tube and help the team and application move forward.
Summary
In this post, we covered the microservices antipatterns that I have witnessed while working with clients of all sizes:
- Break the Piggy Bank
- Everything Micro (Except for the Data)
- We are Agile! a.k.a. The Frankenstein
After each one, we also tried to give some hope and show a path forward to help correct its mistakes.
I hope you enjoyed this at least as much as blue/green deployment with no sev 1 issues!
Published at DZone with permission of Dallas Monson, DZone MVB.