Micro-Components Architecture
A discussion of the benefits of microservices and how these benefits can be applied to a single microservice via micro-components.
Microservices
If you are in any way involved in software development, you will have heard about the recent popularity of "microservices architecture." In short, this architectural approach splits the various data flows of your system/application into small components (called "services" or "microservices"), each given a limited role, or ideally a single one. By combining these components into a common system you can achieve functionality equivalent to that of a classic monolithic architecture, but with several advantages:
- Mental complexity control: This might be the greatest advantage of microservices architecture. Enclosing part of the system's logic in one small component makes it significantly easier to support its code base and to handle further modifications and improvements to that part of the system. Because the components are separated, the system as a whole automatically becomes simpler.
- Easier and faster scaling: With microservices, it's almost always possible to scale up some part of the system to match incoming traffic. In most cases, it is enough to drop a load balancer in front of a pool of microservice instances and get on with your business tasks. For the rest of the system's components, nothing changes: each of them treats the newly added load balancer as just another instance of your (now scalable) microservice. Another significant advantage here is the ability to prepare for traffic spikes, and to automatically re-assign resources when traffic goes down.
- Technological stack independence: Because microservices are independent within any given system, and communicate through a common interface (often HTTP/S), it's possible to use different technologies for different solutions. Moreover, your suite of microservices might simply be written and supported by different teams, making it possible not only to use the technology that best fits your requirements, but also to decrease the mental load on developers and to shrink each team's zone of responsibility, and with it the probability of error.
(Go/Co) Routines
[Image: visualization of the coroutine concept, taken from Kotlin]
Another popular approach in modern software development is asynchronous operation processing, multi-processing/multi-threading, and various combinations of similar techniques. One of them is coroutines: a method of handling huge numbers of micro-tasks on a small number of threads/processes.
One of the best implementations of coroutines that I know of was initially introduced by Golang (where they are called goroutines). A somewhat similar implementation (from my point of view) is now available in Kotlin.
In short, the primary idea behind coroutines is:
You can efficiently distribute and execute a large number of small task flows on top of however many threads/processors you can spare at the moment.
In most implementations, coroutines are significantly lighter than traditional OS threads/processes. They are sometimes called green threads (though that term also covers other threading models). Creating and launching a coroutine is usually a very cheap operation.
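To make the "cheap to create" claim concrete, here is a minimal Go sketch that launches a hundred thousand goroutines; the count and the squaring "task" are arbitrary illustrations, not taken from the article:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	results := make([]int, 100_000)

	// Launching 100,000 goroutines is cheap; each one starts with a
	// small stack (a few KB) that grows on demand.
	for i := range results {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			results[n] = n * n // a tiny "micro-task"
		}(i)
	}

	wg.Wait()
	fmt.Println("processed", len(results), "micro-tasks")
}
```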
For example, Go's standard HTTP library processes each incoming request in its own goroutine. Compared with similar solutions, this library shows relatively good performance, so the concept of goroutines can be considered battle-tested in high-load environments.
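For reference, this is all it takes to get that per-request concurrency from the standard library (the port and response text are placeholders):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// net/http serves every incoming connection in its own goroutine,
	// so a slow client never blocks the others.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "served from an independent goroutine")
	})
	http.ListenAndServe(":8080", nil) // error ignored for brevity
}
```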
Channels
It is great to have the ability to launch a huge number of micro-tasks and to be sure of their isolation. But often there are additional requirements: transferring data from one task to another, synchronizing several tasks, etc.
Here we’ll introduce the concept of “channels.”
In short, channels are data-race-safe communication primitives. Their operation set is trivial: atomic write/read. The main purpose is to keep the code free from data races and easy to maintain.
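In Go syntax, that whole API surface looks like this (the message payload here is just a placeholder):

```go
package main

import "fmt"

func main() {
	ch := make(chan string) // a channel of strings

	go func() {
		ch <- "task result" // atomic write from one goroutine
	}()

	msg := <-ch // atomic read; blocks until the sender is ready
	fmt.Println(msg)
}
```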
In the most popular channel implementations, two varieties (or modes) of channels are available: buffered and unbuffered.
The difference is that buffered channels can accept data even when no coroutine is waiting to read it. Unbuffered channels, on the contrary, can't be written into until some other coroutine is expecting data from them.
You can think about these two modes as follows:
[Image: unbuffered channels visualization]
In unbuffered channels, global data flow is strictly synchronous, even if there are several independent processors.
[Image: buffered channels visualization]
In buffered channels, global data flow is asynchronous. Tasks are processed independently, with some probability of data-flow delays, missing data, and logic/event collisions.
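A short Go sketch contrasting the two modes (buffer size and values are arbitrary):

```go
package main

import "fmt"

func main() {
	// Buffered: sends succeed immediately while the buffer has room,
	// even though nobody is reading yet.
	buffered := make(chan int, 2)
	buffered <- 1
	buffered <- 2
	fmt.Println(<-buffered, <-buffered) // 1 2

	// Unbuffered: a send blocks until a receiver is ready, so the
	// handoff below is strictly synchronous.
	unbuffered := make(chan int)
	go func() { unbuffered <- 3 }() // would deadlock without a reader
	fmt.Println(<-unbuffered)       // 3
}
```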
Events
Events are a well-known concept in software development. There is no reason to dive deeply into an explanation, so we’ll limit ourselves to an overview:
An event object represents some internal state change, and contains all relevant information about it.
For example, if you have a timer in your application, it can emit an "EventTick" periodically. This event might contain a timestamp marking the moment it was generated, a reference to the timer itself, etc.
Internally, this event might be used to trigger other events or drive the application's logic.
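Here is a sketch of how such an event might look in Go; the EventTick name and its fields are illustrative assumptions, not an API from the article:

```go
package main

import (
	"fmt"
	"time"
)

// EventTick is a hypothetical event object carrying everything a
// consumer might need: when the tick happened and which timer fired.
type EventTick struct {
	Timestamp time.Time
	Source    *time.Ticker
}

func main() {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()

	events := make(chan EventTick)

	// The emitter wraps each raw tick into an event object.
	go func() {
		for t := range ticker.C {
			events <- EventTick{Timestamp: t, Source: ticker}
		}
	}()

	// A consumer reacts to the first few ticks.
	for i := 0; i < 3; i++ {
		e := <-events
		fmt.Println("tick at", e.Timestamp)
	}
}
```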
Mixing the Magic Potion
Now, imagine your next application as a set of microservices connected by a common internal core. “Microservices” should be called “micro-components” in this case, because they are still a part of a common application.
Each micro-component runs an independent processing flow that performs a single task. For example, if your application has a network layer, you may have Network Receiver and Network Sender components whose only responsibility is receiving/sending data through the network. If your application has a logging layer, it might also be implemented as an independent micro-component.
Each micro-component defines its own interface of outgoing/incoming events, and the internal processing flow for them. For example, the Network Receiver might define an OutgoingClientRequests channel, which would be populated with newly received requests from users. Interfaces, as you might guess, are implemented on top of channels, so the communication flows stay obvious, predictable, and easy to maintain.
The core's role is to connect outgoing channels to incoming channels, enabling data flow between the micro-components.
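A minimal Go sketch of this wiring, assuming the NetworkReceiver with its OutgoingClientRequests channel from above plus a hypothetical Logger component; everything except OutgoingClientRequests is an illustrative name:

```go
package main

import "fmt"

// Request is a placeholder for whatever the application receives
// from its clients.
type Request struct{ Payload string }

// NetworkReceiver is a hypothetical micro-component: it exposes a
// single outgoing channel and hides its processing flow behind it.
type NetworkReceiver struct {
	OutgoingClientRequests chan Request
}

func (r *NetworkReceiver) Run() {
	// A real component would read from sockets; here we emit fakes.
	for i := 0; i < 3; i++ {
		r.OutgoingClientRequests <- Request{Payload: fmt.Sprintf("req-%d", i)}
	}
	close(r.OutgoingClientRequests)
}

// Logger is another micro-component, with an incoming interface.
type Logger struct {
	IncomingRecords chan string
}

func (l *Logger) Run() {
	for rec := range l.IncomingRecords {
		fmt.Println("log:", rec)
	}
}

func main() {
	receiver := &NetworkReceiver{OutgoingClientRequests: make(chan Request)}
	logger := &Logger{IncomingRecords: make(chan string)}

	done := make(chan struct{})
	go receiver.Run()
	go func() { logger.Run(); close(done) }()

	// The "core": the only place that knows how components connect.
	// Here it simply forwards every received request to the logger.
	for req := range receiver.OutgoingClientRequests {
		logger.IncomingRecords <- req.Payload
	}
	close(logger.IncomingRecords)
	<-done // wait for the logger to drain its channel
}
```

Note that main acts purely as a router: the two components never reference each other, only their own channels.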
In summary, the advantages of this approach are almost the same as those of a microservices architecture. But I should also mention:
- Lower risk of failure: When a given component fails, the rest of the application stays live and can continue its processes. Sure, in some cases it might be better to shut the application down on a critical error. But in the modern fast, parallel, and asynchronous world, handling minor errors is quite normal, and a micro-component architecture makes them easy to deal with.
- Strong code decomposition: Leaky abstractions are almost impossible, because developers are forced to implement explicit communication interfaces. That's one more opportunity to take a step back and rethink any spaghetti code.
- Data-flow first: It is impossible for developers to modify the code unless they understand how data flows between components. Optimistically, the whole data flow is declared in one place (the application's core), so almost any newly hired developer can see how the data flows after a first look at the core implementation. Also, the purpose of each component is relatively easy to discern simply by looking at its events interface.
Any Examples/Implementations?