Blockchain Case Using Kubernetes
This article shows how we used Kubernetes and microservices architecture to build a crypto solution with complex functionality.
As an illustration of how Kubernetes can be used, I propose to consider one of our cases: an application developed for the cryptocurrency market. The technologies used for this app are utilitarian, though, and can serve other projects as well. In other words, the technical solution is quite general, adjusted mainly to Kubernetes, and can be applied in other industries too.
Technologies That We Used
The project started as a start-up and had a limited budget. The client emphasized that there would be regular demos for investors, at which we had to show steady progress on new features. Thus, we decided to use the following technologies:
- Node.js (NestJS framework)
- PostgreSQL database
- KafkaJS
- Kubernetes (k8s) + Helm charts
- Flutter
- React
Development Process
In the first stage, our main goal was to split the application into microservices. In our case, we decided to create six of them:
- Admin microservice
- Core microservice
- Payment microservice
- Mails and notifications service
- Cron tasks service
- Webhooks microservice
It is worth mentioning that while the tech stack is utilitarian and can be reused in various cases without changes, the foregoing microservices are not: they were designed specifically to serve the features required in our project. So you can use the same technologies, but you will have to design your own set of microservices according to your needs.
Let’s look at how to build these microservices with NestJS. First, we need configuration options for the Kafka message broker, so we created a shared resources folder for the common modules and configuration of all microservices.
Microservices Configuration Options
import { ClientProviderOptions, Transport } from '@nestjs/microservices';
import CONFIG from '@application-config';
import { ConsumerGroups, ProjectMicroservices } from './microservices.enum';

const { BROKER_HOST, BROKER_PORT } = CONFIG.KAFKA;

// Connection options for a microservice that produces events.
export const PRODUCER_CONFIG = (name: ProjectMicroservices): ClientProviderOptions => ({
  name,
  transport: Transport.KAFKA,
  options: {
    client: {
      brokers: [`${BROKER_HOST}:${BROKER_PORT}`],
    },
  },
});

// Connection options for a microservice that consumes events;
// the groupId ties all instances of the service into one consumer group.
export const CONSUMER_CONFIG = (groupId: ConsumerGroups) => ({
  transport: Transport.KAFKA,
  options: {
    client: {
      brokers: [`${BROKER_HOST}:${BROKER_PORT}`],
    },
    consumer: {
      groupId,
    },
  },
});
Let’s connect our Admin panel microservice to Kafka in consumer mode. It will allow us to catch and handle events from topics.
To be able to consume events, the app has to run in microservice mode:
app.connectMicroservice(CONSUMER_CONFIG(ConsumerGroups.ADMIN));
await app.startAllMicroservices();
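In context, the bootstrap file can look like this; a minimal sketch, assuming an AppModule and the shared config paths shown above:
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { CONSUMER_CONFIG } from '@shared/microservices/microservices.config';
import { ConsumerGroups } from '@shared/microservices/microservices.enum';

async function bootstrap() {
  // Create a regular HTTP app, then attach the Kafka consumer to it,
  // so the same process serves HTTP requests and consumes topic events.
  const app = await NestFactory.create(AppModule);
  app.connectMicroservice(CONSUMER_CONFIG(ConsumerGroups.ADMIN));
  await app.startAllMicroservices();
  await app.listen(process.env.APP_PORT ?? 3000);
}

bootstrap();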
We can notice that the consumer config includes a groupId. It’s an important option that allows consumers from the same group to receive events from a topic and distribute them among each other to process them faster.
For example, suppose our microservice receives events faster than it can process them. In that case, we can set up autoscaling to spawn additional pods and share the load between them, roughly doubling throughput with a second pod.
To make this possible, the consumers have to belong to the same group, and pods spawned during scaling join that group as well. They will then share the load instead of duplicating work, because each Kafka partition of a topic is consumed by only one member of the group.
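Note that the load can only be distributed if the topic has more than one partition, since Kafka assigns each partition to at most one consumer within a group. A sketch of pre-creating such a topic with the KafkaJS admin client (the broker address, topic name, and partition count are illustrative):
import { Kafka } from 'kafkajs';

async function createPartitionedTopic() {
  const kafka = new Kafka({ clientId: 'setup', brokers: ['localhost:9092'] });
  const admin = kafka.admin();
  await admin.connect();
  // Three partitions allow up to three consumers in the same group
  // to process the topic in parallel.
  await admin.createTopics({
    topics: [{ topic: 'hero', numPartitions: 3 }],
  });
  await admin.disconnect();
}

createPartitionedTopic();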
Let’s check an example of how to catch and process Kafka events in NestJS:
Consumer Controller
import { Controller } from '@nestjs/common';
import { Ctx, KafkaContext, MessagePattern, EventPattern, Payload } from '@nestjs/microservices';

@Controller('consumer')
export class ConsumerController {
  // Request/response mode: the return value is sent back to the producer.
  @MessagePattern('hero')
  readMessage(@Payload() message: any, @Ctx() context: KafkaContext) {
    return message;
  }

  // Event mode: the event is processed without returning a response.
  @EventPattern('event-hero')
  sendNotif(@Payload() data: any) {
    console.log(data);
  }
}
Consumers can work in two modes: receiving events and processing them without returning any response (the EventPattern decorator), or returning a response to the producer after processing an event (the MessagePattern decorator). EventPattern should be preferred where possible, as it doesn’t add the extra source-code layers needed to provide request/response functionality.
What About Producers?
For connecting producers, we need to provide the producer configuration to the module that will be responsible for sending events.
Producer Connection
import { Module } from '@nestjs/common';
import DatabaseModule from '@shared/database/database.module';
import { ClientsModule } from '@nestjs/microservices';
import { ProducerController } from './producer.controller';
import { PRODUCER_CONFIG } from '@shared/microservices/microservices.config';
import { ProjectMicroservices } from '@shared/microservices/microservices.enum';

@Module({
  imports: [
    DatabaseModule,
    ClientsModule.register([PRODUCER_CONFIG(ProjectMicroservices.ADMIN)]),
  ],
  controllers: [ProducerController],
  providers: [],
})
export class ProducerModule {}
Event-Based Producer
import { Controller, Get, Inject } from '@nestjs/common';
import { ClientKafka } from '@nestjs/microservices';
import { ProjectMicroservices } from '@shared/microservices/microservices.enum';

@Controller('producer')
export class ProducerController {
  constructor(
    @Inject(ProjectMicroservices.ADMIN)
    private readonly client: ClientKafka,
  ) {}

  @Get()
  async getHello() {
    // Fire-and-forget: emit the event without waiting for a response.
    this.client.emit('event-hero', { msg: 'Event Based' });
  }
}
Request/Response-Based Producer
import { Controller, Get, Inject } from '@nestjs/common';
import { ClientKafka } from '@nestjs/microservices';
import { ProjectMicroservices } from '@shared/microservices/microservices.enum';

@Controller('producer')
export class ProducerController {
  constructor(
    @Inject(ProjectMicroservices.ADMIN)
    private readonly client: ClientKafka,
  ) {}

  async onModuleInit() {
    // We need to subscribe to the topic so that
    // receiving responses from the Kafka microservice is possible.
    this.client.subscribeToResponseOf('hero');
    await this.client.connect();
  }

  @Get()
  async getHello() {
    // send() returns an Observable that resolves with the consumer's response.
    const responseBased = this.client.send('hero', { msg: 'Response Based' });
    return responseBased;
  }
}
Each microservice can work in either of the two modes (producer/consumer) or in both at once (mixed). Usually, microservices use the mixed mode for load-balancing purposes, generating events to a topic and consuming them evenly, sharing the load.
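A mixed-mode controller can combine both roles. A sketch, with hypothetical topic names:
import { Controller, Inject } from '@nestjs/common';
import { ClientKafka, EventPattern, Payload } from '@nestjs/microservices';
import { ProjectMicroservices } from '@shared/microservices/microservices.enum';

@Controller()
export class MixedModeController {
  constructor(
    @Inject(ProjectMicroservices.ADMIN)
    private readonly client: ClientKafka,
  ) {}

  // Consume an event from one topic...
  @EventPattern('payment-created')
  handlePaymentCreated(@Payload() payment: any) {
    // ...and produce a follow-up event to another topic after processing.
    this.client.emit('notification-requested', { paymentId: payment.id });
  }
}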
The Kubernetes configuration is based on Helm chart templates implemented for each microservice.
The Admin API microservice components and their structure are described by Helm charts.
The template consists of a few configuration files:
- deployment
- HPA (Horizontal Pod Autoscaler)
- ingress controller
- service
Let’s look at each configuration file (without Helm templating):
Admin-API Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: admin-api
  template:
    metadata:
      labels:
        app: admin-api
    spec:
      containers:
        - name: admin-api
          image: xxx208926xxx.dkr.ecr.us-east-1.amazonaws.com/project-name/stage/admin-api
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              cpu: 250m
              memory: 512Mi
          ports:
            - containerPort: 80
          env:
            - name: NODE_ENV
              value: production
            - name: APP_PORT
              value: "80"
A Deployment can contain more fine-grained configuration, such as health checks, an update strategy, and so on. However, we provide a basic example, which can be extended depending on the needs of any other project.
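For example, liveness and readiness probes could be added under the container spec. A sketch, assuming the app exposes the same /healthcheck endpoint that the ingress below uses:
          livenessProbe:
            httpGet:
              path: /healthcheck
              port: 80
            initialDelaySeconds: 15
            periodSeconds: 20
          readinessProbe:
            httpGet:
              path: /healthcheck
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10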
Admin-API Service
---
apiVersion: v1
kind: Service
metadata:
  name: admin-api
spec:
  selector:
    app: admin-api
  ports:
    - name: admin-api-port
      port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
We need to expose the service to the outside world in order to use it. Let’s expose our app via a load balancer and provide an SSL configuration to use a secure HTTPS connection.
To do that, we need to install a load balancer controller on the cluster. The most popular solution is the AWS Load Balancer Controller.
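A sketch of installing it with Helm (the cluster name is a placeholder, and the service account is assumed to be pre-created with the required IAM role):
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=<your-cluster-name> \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller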
Then, we need to create an Ingress with the following configuration:
Admin-API Ingress Controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: default
  name: admin-api-ingress
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: admin-api-alb
    alb.ingress.kubernetes.io/ip-address-type: ipv4
    alb.ingress.kubernetes.io/tags: Environment=production,Kind=application
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-2:xxxxxxxx:certificate/xxxxxxxxxx
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
    alb.ingress.kubernetes.io/healthcheck-path: /healthcheck
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/group.name: admin-api
spec:
  ingressClassName: alb
  rules:
    - host: example.com
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: admin-api
                port:
                  number: 80
After applying this configuration, a new ALB load balancer will be created. We then need to create a domain with the name provided in the ‘host’ parameter and route traffic from this host to the load balancer.
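The ALB hostname that the DNS record (for example, a Route 53 alias record) should point to can be read from the ingress status:
kubectl get ingress admin-api-ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'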
Admin-API Autoscaling Configuration
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: admin-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: admin-api
  minReplicas: 1
  maxReplicas: 2
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 90
What About Helm?
Helm becomes very useful when we want to reduce the complexity of our k8s infrastructure. Without it, we would need to write a lot of YAML files before we could run anything on a cluster.
We would also have to keep in mind the relations between applications, labels, names, etc. Helm makes everything simpler: it works much like a package manager, allowing us to describe the app as a template and then prepare and run it using a few simple commands.
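For instance, a chart for the Admin API microservice could be laid out as follows and rolled out with a single command (the chart path and release name are assumptions):
admin-api/
  Chart.yaml
  values.yml
  values-stage.yml
  templates/
    deployment.yaml
    service.yaml
    ingress.yaml
    hpa.yaml

helm upgrade --install admin-api ./admin-api -f ./admin-api/values-stage.yml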
Let’s use Helm to make our templates:
Admin-API Deployment (Helm Chart)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.appName }}
  template:
    metadata:
      labels:
        app: {{ .Values.appName }}
    spec:
      containers:
        - name: {{ .Values.appName }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: {{ .Values.internalPort }}
          {{- with .Values.env }}
          env: {{ tpl (. | toYaml) $ | nindent 12 }}
          {{- end }}
Admin-API Service (Helm Chart)
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.appName }}
spec:
  selector:
    app: {{ .Values.appName }}
  ports:
    - name: {{ .Values.appName }}-port
      port: {{ .Values.externalPort }}
      targetPort: {{ .Values.internalPort }}
      protocol: TCP
  type: NodePort
Admin-API Ingress (Helm Chart)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: default
  name: ingress
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: {{ .Values.ingress.loadBalancerName }}
    alb.ingress.kubernetes.io/ip-address-type: ipv4
    alb.ingress.kubernetes.io/tags: {{ .Values.ingress.tags }}
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: {{ .Values.ingress.certificateArn }}
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
    alb.ingress.kubernetes.io/healthcheck-path: {{ .Values.ingress.healthcheckPath }}
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: {{ .Values.ingress.healthcheckIntervalSeconds | quote }}
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/group.name: {{ .Values.ingress.loadBalancerGroup }}
spec:
  ingressClassName: alb
  rules:
    - host: {{ .Values.domain }}
      http:
        paths:
          - path: {{ .Values.path }}
            pathType: ImplementationSpecific
            backend:
              service:
                name: {{ .Values.appName }}
                port:
                  number: {{ .Values.externalPort }}
Admin-API Autoscaling Configuration (Helm Chart)
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "ks.fullname" . }}
  labels:
    {{- include "ks.labels" . | nindent 4 }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "ks.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
    {{- end }}
    {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
    {{- end }}
{{- end }}
Values for the templates are located in the “values.yml”, “values-dev.yml”, and “values-stage.yml” files; which of them is used depends on the environment. Let’s check an example of the values for the stage environment.
Admin-API Helm values-stage.yml File
env: stage
appName: admin-api
domain: admin-api.xxxx.com
path: /*
internalPort: '80'
externalPort: '80'
replicas: 1
image:
  repository: xxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/admin-api
  pullPolicy: Always
  tag: latest
ingress:
  loadBalancerName: project-microservices-alb
  tags: Environment=stage,Kind=application
  certificateArn: arn:aws:acm:us-east-2:xxxxxxxxx:certificate/xxxxxx
  healthcheckPath: /healthcheck
  healthcheckIntervalSeconds: '15'
  loadBalancerGroup: project-microservices
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
env:
  - name: NODE_ENV
    value: stage
  - name: ADMIN_PORT
    value: "80"
To apply the configuration on the cluster, we need to upgrade the chart and restart our deployment.
Let’s check the GitHub Actions steps that are responsible for this.
Apply Helm Configuration in GitHub Actions
Summary
In this article, we examined how to build microservices using Kubernetes on a specific case. We have deliberately skipped other must-have steps and components that would turn these code samples into a full-fledged working app. However, the foregoing source code is enough to show and explain how Kubernetes-based microservices are built.
Published at DZone with permission of Tetiana Stoyko.