Developing Cloud-Native Applications With Containerized Databases
Learn how to use Kustomize and Tekton to provide Kube-native automated workflows that take into account parameters such as database operators, StorageClasses, and PVCs.
With the advent of microservices in Kubernetes, individual developer teams now manage their own data, middleware, and databases. Automated tests and CI/CD pipelines must be revisited to include these new requirements.
This session demonstrates how to use Kustomize and Tekton to provide Kube-Native automated workflows taking into account new parameters such as database operators, StorageClass, and PVC.
The demonstration focuses on building a comics cards web application using a Flask-based frontend and leveraging MongoDB as the database.
So for people who don't know me, I'm Nic. I'm a developer advocate with Ondat, and I also happen to run DoK London. So if you're from the London area, don't hesitate to subscribe to the meetup group. As usual for DoK London, we focus on hands-on labs, so today I'm going to try to do a demo in 10 minutes. I sacrificed a goat yesterday, so hopefully it should be fine. I tested it several times, so if it's not working, it's not my fault, I promise.
So the talk for today is going to be focused on developing cloud-native applications with containerized databases. But really, it's about shifting data left with Kubernetes. Shifting left in development means starting to test your code and your application as early as possible in the development process. And when you test a new application, you want to test it in context, which is the right thing to do, and that means you need to bring two things: the application context, but also the infrastructure context. Typically that means networking, storage, and things like operators, because you will need your database, and you need to be able to configure that database declaratively through an operator, and the same goes for your storage.
So typically, you want to run a CAS solution, container attached storage; there are plenty of them on the market, including Ondat, of course. And then you can also bring in things like CNI, CSI, and of course service security. The result of this is a declarative model for developers to consume infrastructure as code. But it's better than the bare infrastructure as code you're used to with Terraform, because it's Kubernetes-native, really native to Kubernetes: you don't need to run anything but Kubernetes, including your pipelines. And this is the purpose of today: I'm going to show you how to run different pipelines, whether you're on your laptop building your development environment, and then how you can go from there to your production environment. And of course, there are benefits for DevOps, as I said: databases on demand, pretty much anything your application requires in terms of infrastructure, reduced costs, accelerated software delivery, and in the end better code quality.
So just before jumping into the demo, this is the picture of what I'm going to show you today. There are a lot of moving parts, but what I want to focus on starts from the dev laptop; you can see there's a fork here. You have your application code that you commit from your developer laptop, and then there are two paths. Either you build locally with Docker and push your container image to Docker Hub, or, if you want to deploy to production, you push to your production repository and rely on some sort of pipeline to build the image and update the manifests, still using the same Docker repository. If it's production, you then want a GitOps tool like Flux that will pick up the new manifests and deploy them directly to production. If you're on your laptop, you need some sort of tool that updates the manifests, rebuilds the application, and runs it on your development Kubernetes cluster.
All right. So now I'm going to show you how to do this. There are essentially two main components you want to have when you're developing an application starting from your laptop, if you want to shift things left. First, you need an infrastructure definition. The infrastructure definition tells you how you want to configure your database, and whether you want to enable certain features on the storage side, such as replication and encryption if you want to test the performance of your application, these kinds of things. For this, you usually use Kustomize. The principle of Kustomize is to use overlays: you have a base that defines all the different manifests required to deploy your application, and overlays that adapt it per environment.
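To make that concrete, here's a minimal sketch of what such a base-plus-overlay layout could look like. The file and resource names are illustrative, not the exact ones from the demo repository:

```yaml
# base/kustomization.yaml -- manifests shared by every environment
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - frontend-deployment.yaml
  - frontend-service.yaml
  - mongodb-community-cr.yaml
  - storageclass.yaml
---
# overlays/dev/kustomization.yaml -- dev laptop overlay patching the base
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: mongodb-dev-patch.yaml       # smaller storage request, single replica-set member
  - path: storageclass-dev-patch.yaml  # volume replication and encryption switched off
```

Kustomize can render or apply either environment with something like kubectl apply -k overlays/dev, and the same overlays are what Skaffold consumes for the dev loop and Flux consumes for production later in the demo.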
So in our case, we're going to be looking at an application which is pretty famous by now: it's my Marvel application, which I've been showing multiple times at DoK. It's composed of a front end, which is just a Flask application, so Python, and then a database, which this time is MongoDB. I said Postgres in the session title, but yeah, this time it's going to be MongoDB, using the MongoDB Community operator. So here I'm just going to show you a couple of things you want to configure.
So this is my dev environment. You can see I'm going to be using my dev overlay for my dev laptop, and this is where I define my MongoDB configuration. You can see the storage space I want to allocate, and the roles and permissions. What is good with operators is that the configuration is effectively immutable: even in production, if someone tries to change those parameters from the command line, they will be rewritten by the operator. So using an operator is also a good way to guarantee immutability for your application configuration, your application context.
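As a rough idea of what that dev patch could contain, here is a hedged sketch of a MongoDBCommunity resource for the MongoDB Community operator. The names, version, and sizes are illustrative, and the exact schema should be checked against the operator release you run; the ondat-dev StorageClass name is a placeholder matching the sketch in the next step:

```yaml
# overlays/dev/mongodb-dev-patch.yaml -- illustrative MongoDBCommunity resource for the dev overlay
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: marvel-mongodb
spec:
  members: 1                     # a single replica-set member is enough on a dev laptop
  type: ReplicaSet
  version: "5.0.5"
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: marvel-app
      db: marvel
      passwordSecretRef:
        name: marvel-app-password
      roles:
        - name: readWrite        # the roles and permissions mentioned above
          db: marvel
      scramCredentialsSecretName: marvel-app-scram
  statefulSet:
    spec:
      volumeClaimTemplates:
        - metadata:
            name: data-volume
          spec:
            storageClassName: ondat-dev   # hypothetical StorageClass, sketched below
            resources:
              requests:
                storage: 1Gi              # the storage space allocated for dev
```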
Here, for example, I'm using an Ondat storage class, and this is where I define things like the type of the file system, so xfs, which is the recommended one for MongoDB. In the dev overlay I set the number of replicas to zero, because this is a dev environment, and I don't want encryption either. But if I want to do some testing with replication enabled, I just have to change those parameter values declaratively and run against my development environment again. That's it for the infrastructure part.
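For illustration, an Ondat-backed StorageClass for the dev overlay could look roughly like the sketch below. The storageos.com keys follow Ondat's feature-label convention, so double-check the exact parameter names against the Ondat documentation for your version:

```yaml
# overlays/dev/storageclass-dev-patch.yaml -- illustrative Ondat StorageClass for the dev laptop
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ondat-dev
provisioner: csi.storageos.com            # Ondat CSI driver
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: xfs          # xfs, the recommended file system for MongoDB
  storageos.com/replicas: "0"             # no volume replication on a dev laptop
  storageos.com/encryption: "false"       # off in dev, flipped to "true" in production
```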
Now, of course, we need some way to dynamically update things on your laptop when you're developing the application. So this is now my application repository. You can see here I've got my Dockerfile, my Python scripts for Flask, and the HTML page that renders my front end. And you can see here a Skaffold configuration. Basically, I'm going to use Skaffold, and there are other tools on the market, like Tilt, that do the same thing. The idea is this: as soon as I save changes to my HTML page, Skaffold, which is running in development mode, runs the pipeline to build the image and then uses Kustomize to deploy it to my local Kubernetes cluster. So let's take a look at the Kubernetes cluster environment.
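A Skaffold configuration for this kind of dev loop could look roughly like the sketch below; the image repository and paths are illustrative, not the exact ones from the demo:

```yaml
# skaffold.yaml -- illustrative dev-loop configuration
apiVersion: skaffold/v2beta29
kind: Config
metadata:
  name: marvel-app
build:
  artifacts:
    - image: myrepo/marvel-frontend       # hypothetical Docker Hub repository
      context: .                          # directory holding the Dockerfile and Flask code
  local:
    push: false                           # keep images local while iterating in dev mode
deploy:
  kustomize:
    paths:
      - overlays/dev                      # the dev overlay shown earlier
```

Running skaffold dev then watches the source tree, rebuilds the image on every save, and re-applies the dev overlay to the local cluster.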
This is my laptop configuration, which is running on k3s. You can see that Skaffold has already deployed my application from when I started running Skaffold in dev mode; I did it prior to the demo because it takes some time. So now the idea is that I want to modify my code and see the results directly, without doing anything but saving my HTML file.
But first, let's just check the application. So here I'm just going to launch my application in a browser, localhost port 8080, and you can see the wonderful Marvel app. And here I've got a typo: there's more than one comic, so I want to change this into a plural. Okay, so I'm killing this. Now the only thing I need to do is find "comic" here and replace it with "comics", like this. Okay, you can see it's been modified. Now, look at this: everything that's happening is being logged by Skaffold. So as soon as I save this, you can see it detects the change, and now it's going to build the image again and redeploy my application. And it's the same thing if I change any configuration on the infrastructure side; it will do exactly the same thing.
So now if I go back here, I should see the old pod terminating; it's already done. If you look here, this one is 16 seconds old, so that's roughly what Skaffold took to redeploy my application into my development cluster. Let's just check locally that the change has been applied: refresh, and there it is. Okay, so now I'm happy with my development environment. Now imagine that you have run all your tests and you want to deploy to production. Remember, I'm using a single repository; Flux watches that repository, and as soon as it detects a change it applies it to my production cluster. So here is my production cluster: you can see the same thing, my Marvel application with those four containers there, 41 minutes old; that's from when I changed them before. You can also see a lot of completed tasks. This is because I'm using Tekton. Tekton is a Kube-native pipeline engine that lets you keep your pipelines within Kubernetes. Every task that you run in Tekton is just a CRD, a custom resource definition, and you combine different tasks, like building your images and updating your manifests; each of these corresponds to a task, so to a Kubernetes resource, and that's why you see them there.
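To show what "every task is just a CRD" means in practice, here is a hedged sketch of a custom Task that patches the image reference in the production manifests. The manifest file name and parameter names are made up for the example, and the git push assumes credentials are supplied separately, for instance through a workspace or a service account:

```yaml
# illustrative Tekton Task -- each pipeline step is just a custom resource
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: update-manifest
spec:
  params:
    - name: image-digest
      description: Image reference produced by the build task
  workspaces:
    - name: source                        # shared volume carrying the manifests checkout
  steps:
    - name: patch-deployment
      image: alpine/git:latest
      workingDir: $(workspaces.source.path)
      script: |
        #!/bin/sh
        # replace the image reference in the production manifest and push it back
        sed -i "s|image: .*marvel-frontend.*|image: $(params.image-digest)|" frontend-deployment.yaml
        git commit -am "Update image to $(params.image-digest)" && git push
```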
So now I need to do a couple of things; I'm still on my laptop here. The first thing I want to do, of course, is to commit my changes to the repository. I'm going to commit the updates and push them, but before pushing, I want to show you a couple of things. This is the repository where I have my target manifests, the ones Flux is going to pick up and apply to my production cluster. These manifests will be updated by Tekton: the role of Tekton, again, is to take the application code, build the image, and replace the image in the manifest here with the particular digest, and so on. And these manifests represent both the application configuration and the infrastructure configuration, which I can also change if I want. For example, here, again, I'm using Ondat as the underlying distributed storage, with the number of replicas set to two and encryption set to true for my production environment.
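On the Flux side, the wiring that watches that manifests repository could look roughly like this; the repository URL, names, and paths are illustrative:

```yaml
# illustrative Flux resources watching the production manifests repository
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: marvel-manifests
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/marvel-production-manifests   # hypothetical repo URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: marvel-app
  namespace: flux-system
spec:
  interval: 1m
  path: ./overlays/production        # the production overlay of the same Kustomize layout
  prune: true
  sourceRef:
    kind: GitRepository
    name: marvel-manifests
```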
So now I'm going to push it, and then... okay, it's pushed. Now I'm going to move to Tekton, and I'm going to manually trigger my pipeline. You can also trigger it automatically using Tekton Triggers and events, but good luck with that: Tekton is kind of difficult, the learning curve is quite steep even without Triggers, and if you want to use Tekton Triggers it gets even worse. The idea here, to launch the pipeline, remember we are in Kubernetes, so it's as easy as creating a manifest. As soon as I create that particular manifest, the pipeline launches. And again, the pipeline is going to build the image and update the manifests, and then Flux is going to pick up what's in the repository and, with the GitOps pipeline, update the application in the production cluster.
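Triggering the pipeline by hand really is just creating one more Kubernetes object. A hedged sketch of such a PipelineRun is below; the Pipeline it references is sketched a bit further down, and the repository and image names are placeholders:

```yaml
# illustrative PipelineRun -- creating this object is what launches the pipeline
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: marvel-build-and-update-
spec:
  pipelineRef:
    name: marvel-build-and-update        # hypothetical Pipeline name
  params:
    - name: git-url
      value: https://github.com/example/marvel-app   # application source repository
    - name: image
      value: docker.io/myrepo/marvel-frontend        # image pushed to Docker Hub
  workspaces:
    - name: shared-workspace
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: ondat-prod               # Ondat-backed volume shared by the tasks
          resources:
            requests:
              storage: 1Gi
```

Because it uses generateName, you launch it with kubectl create -f rather than apply, and you can then follow the run with the tkn CLI.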
So just watch this: I'm going to be monitoring the Flux reconciliation. At the moment, you can see the last reconciliation happened some time ago, so as soon as I trigger the pipeline, Flux is going to pick it up, and here we should see some updates. So let's do this. Now, there's a command to monitor what's happening, tkn pipelinerun logs, and it's going to chain through the different tasks. It will take probably one minute, so during that time I'm going to explain a couple of the tasks.
So the different tasks are basically going to build a Docker image, in the same way I did locally on my laptop, and then push it to Docker Hub. This time it's Tekton that builds the image, not the local Docker daemon. Remember, it's running in Kubernetes, and in Kubernetes you don't want to mount the Docker socket; that's bad, because you need to do it as root. So typically you use something like Kaniko, which builds the image entirely in user space. And on the Tekton catalog you can find a lot of useful pre-built tasks, and Kaniko is actually one of them. So: build the image. Then, from the image, I'm going to use what we call a workspace, which is represented as a volume in Tekton, and that workspace transfers data from one task to another; here I'm transferring the image digest into my manifests. As a side note, Tekton has a limitation that effectively restricts you to a single volume shared between all the tasks, which normally ties them to the same node. But because we are running Ondat here, we kind of remove that limitation, because the volume can be attached from any node, regardless of where the pod consuming it is scheduled. So basically, we help you run Tekton, which is great.
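Putting those pieces together, the Pipeline itself could be sketched like this, chaining the catalog git-clone and kaniko tasks with the custom update-manifest task from earlier through a single shared workspace. In the real setup the production manifests live in their own repository, which would get its own clone step; this sketch keeps everything on one workspace for brevity:

```yaml
# illustrative Pipeline chaining catalog tasks through one shared workspace
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: marvel-build-and-update
spec:
  params:
    - name: git-url
    - name: image
  workspaces:
    - name: shared-workspace               # the single Ondat-backed volume shared by all tasks
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone                    # Tekton catalog task
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: shared-workspace
    - name: build-image
      runAfter: ["fetch-source"]
      taskRef:
        name: kaniko                       # Tekton catalog task, builds in user space (no Docker socket)
      params:
        - name: IMAGE
          value: $(params.image)
      workspaces:
        - name: source
          workspace: shared-workspace
    - name: update-manifest
      runAfter: ["build-image"]
      taskRef:
        name: update-manifest              # the custom Task sketched earlier
      params:
        - name: image-digest
          value: "$(params.image)@$(tasks.build-image.results.IMAGE_DIGEST)"
      workspaces:
        - name: source
          workspace: shared-workspace
```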
So here it's done: the manifest has been updated by my last task. If I go back into my repository, I should see an update from 31 seconds ago. And what has changed? I kept the same infrastructure environment; the only thing that's changed is the image digest here. So now I should see Flux trying to reconcile, and here it is: a new reconciliation with the applied revision. If we go back to the production cluster, you should see pods that are now 23, 25 seconds old. So effectively it has redeployed the whole application in production, with the production configuration.
So the last step is to check this. I'm going to open the application again, but this time from the production perspective. Let's do this; I think that's on the screen here. That's the moment of truth: did it work? Okay, it works. My production environment has changed from "comic" to "comics" with an s. And that concludes our demo. So just to conclude, I hope you enjoyed this demo. The goal was really to show you that developing an application from your laptop to production only takes, what, 10 minutes. That's about it. You just have to learn Kubernetes. So thank you for watching, and I'll see you next time. Bye.