Kubernetes: Persistent Disk or StatefulSet?
When and why to use Persistent Disk and StatefulSet with Kubernetes
Kubernetes users are often confused about when to use a Deployment with a PVC and when to use a StatefulSet with a PVC. There is also a general lack of understanding of disk access policies, what RWO/RWX mean, and what they allow you to do. These concepts are complicated and require a deep level of understanding if users are to avoid bad decisions they will come to regret later.
In this talk, Portainer co-founder Neil Cresswell explores when to use each type and what to consider before deciding: disk access policies, what RWO really means, and how RWX disk access changes the equation for persistence.
Bart Farrell 00:00
Getting some love in. So let's get this moving. We're going live, and it is super good to be back. Welcome, everyone, to the Data on Kubernetes Community. After a brief hiatus, we are back in full force with tons of livestreams, and we're getting close to livestream number 100. This is livestream 98; because of some scheduling changes you may have seen 96 and 97 announced around the same time we scheduled this, but this is in fact livestream number 98. We are getting close to 100, and I'm very stoked about that. A couple of updates for new folks: we had a research report come out around KubeCon (KubeCon + CloudNativeCon North America 2021), so feel free to check that out. Lots of interesting information there. We surveyed over 500 different organizations to see how they view the challenges and opportunities of working with data on Kubernetes: running stateful workloads, databases, StatefulSets, Persistent Volumes, Persistent Disks, some of the stuff we're going to be talking about today with our honored guest, Neil Cresswell, the CEO of Portainer, normally based in New Zealand, currently joining us from New York.
Bart Farrell 01:06
So like I said, you're originally from New Zealand. Can you tell us a little bit about your background, how you ended up in this cloud-native space, and the part data played in how Portainer got started?
Neil Cresswell 01:17
I'll keep it brief because I can go on forever talking about this. I am a career consultant, primarily in the infrastructure space, so I have had a lot of experience with VMware and enterprise storage in the past. I made the transition into containers around five years ago, I think, in the very early stages of Docker, trying to deliver a CaaS-type solution to the market through a public cloud provider that I was running at the time. So I got exposed to Docker Swarm in the early days, and when we tried to deliver this service to the market, the market said: what is this API endpoint? It's meaningless to me. Where do I log in and deploy things? Where is the web UI? How do I hand this to my users? And at that point, there was nothing around; we hunted and looked and tried to find solutions that would be a user self-service front end for containers, and there was no such thing. So I ended up creating Portainer. Portainer has evolved dramatically over the last four or five years, from a really simple Docker UI to a full container-based application management tool. Our goal now is to make container technology available to everybody, not just those who can afford to retrain themselves on Kubernetes and all of the surrounding technologies. As long as you have Docker on your machine, you should be able to use Portainer to do anything.
Bart Farrell 02:55
And you were also at KubeCon; what was your experience like there?
Neil Cresswell 02:59
KubeCon was awesome. It would have been way better if there were 25,000 people there, but with three and a half thousand it was not a cramped experience. There was plenty of time to speak to people and get to know people, and it seemed to me everyone was quite comfortable; a lot of people were comfortable shaking hands and spending time chatting. So it was just a really good experience being able to meet people face to face for the first time in probably two years, since the last physical event I attended. Some really good connections were made, and I'm looking forward to following up with a decent number of people in the coming weeks.
Bart Farrell 03:36
I completely agree, it's an amazing experience. And as everybody who knows the community is already thinking: when KubeCon ends, we start planning the next one. The CFP for the next KubeCon is already out and available, so if you want to apply for that, I can drop a link later on. There's going to be tons of activity going on in Valencia, Spain, where it will be held next year in May; I believe it starts on the 17th and finishes on the 20th. The Data on Kubernetes Community will of course be there. We're already making plans to have an on-site event, still debating whether it's going to be virtual or hybrid, hoping we can go hybrid to get folks together physically. But anyway, more news will come out relatively soon. That being said, the title of today's talk is Persistent Disk or StatefulSet. Where did you get the interest in this topic, seeing the dilemma of deciding what's best when you're going to be making a deployment?
Neil Cresswell 04:32
I'm spending a lot of time out there in the Kubernetes communities now. If you think about where Portainer came from, we were known as the Docker guys, and for us to rebrand, reposition ourselves, and be relevant in the Kubernetes community, we need to exist in that community. So I've joined as many communities as I possibly can, and I'm engaging with people who are trying to get started with Kubernetes for the very first time. What I came across is a sheer lack of understanding. I don't know whether it's because they don't have an underlying foundational knowledge of containers in general, because they've jumped straight from nothing to Kubernetes; I don't know what it is. But there was a sheer lack of foundational knowledge of containers and statefulness: the fact that containers, by default, aren't stateful in any way, shape, or form, and that making them stateful isn't as simple as you might think. There were a lot of people making flawed assumptions and saying, I'm trying to do X, I'm getting Y, and I can't understand why it isn't delivering what I wanted. It was interesting having arguments with people, saying, "Look, you can't do this. I know it seems like it should work, but it can't." And people are like, "No, it should just work." So I thought it would be a really good topic: discussing how you should achieve persistence the right way, and making sure people are clearly aware of the ways that you can do it but probably shouldn't.
Bart Farrell 06:03
Very practical, hands-on knowledge. Because like you said, a lot of people seem to just jump into Kubernetes, in the same way some people rushed into the cloud without really understanding all the implications that came along with it. And we find that in our community too: a lot of times people associate Kubernetes with stateless, everything's stateless. We're trying to turn that conversation around, saying these are the cases in which there's a lot of value in running stateful applications and stateful workloads. So that being said, I don't want to take up any more of your time, so you can start sharing your screen. Folks, feel free to ask questions in the chat, either here on YouTube or in Slack. If we're not able to address all the questions in the livestream, we can continue the conversation in Slack later. So Neil, if you want to share your screen, go for it.
Neil Cresswell 06:49
I’m not going to bore people with too many slides. I’m going to Alt-tab between slides and live demos. So please pray to the demo gods for me.
Bart Farrell 07:01
I think the demo gods will be on your side, because you can just explain. You dealt with an internet outage recently, which gave you the opportunity to explore different parts of New York.
Neil Cresswell 07:11
In the building we were working from there's construction, and just before we went live the internet went down and they were unable to restore it. So we literally ran to a WeWork, and I'm currently sitting in one; they graciously gave us the conference room for free for the next hour. So shout out to WeWork.
Neil Cresswell 07:35
I'm just starting with what stateful applications are. It's really interesting to understand what they genuinely are. Generally, they are applications that need to persist data whilst they're running, but also through the lifecycle of the container-based application. What that means is restarts, rescheduling (either on the same host or across hosts), updates, and redeployments: the data needs to be persisted through all of them. Stateful applications also genuinely need a persistent identity, and what I mean by that is a hostname; they generally need a consistent hostname. And they also expect to be started and stopped in a certain order. It's very, very difficult to have an application landscape that just starts and stops randomly; the database server really should start before the web servers, not the other way around. So you must understand how to start and stop the applications in the right order. But a stateful application is not just a Persistent Volume. A lot of people think they make their applications stateful just by attaching a PVC, and it's so much more than that.
Now Kubernetes handles state in a couple of ways. Obviously, the data is held in a volume, and it's connected through a volume claim. But you must make sure that the CSI storage driver you're using makes that volume available across all nodes in the cluster. If you don't, if you're just using something like hostPath or similar, then if the container or the pod is rescheduled on another node, the volume will be recreated and it'll be blank of data. So just because you have a persistent volume: if your CSI driver is not cluster-scoped, be careful, your data might not be there when the pod is rescheduled somewhere else. Kubernetes also handles state by allowing you to create hostnames that are predictable, and this is particularly evident in StatefulSets; I'll give you an example of that later, but without them, hostnames can change all over the place. And the start and stop order for pods can be made predictable through Kubernetes state management. So those are the ways Kubernetes handles state.

So how can you deploy a stateful application? Well, there are two ways, and which way you choose depends on your application and its need for state, the degree of statefulness it needs. You can use a regular old Kubernetes Deployment and attach a PVC, and you've got yourself a volume; that volume will be persisted, data will be written to it, and you can delete and recreate your application, and as long as you can connect to that volume, you're all good. The problem with that is you will get random hostnames, and you'll have a random start/stop order as well. Then there's the option of a StatefulSet, again with a PVC. This is used if you need to persist your data and your application needs consistent hostnames and a predictable start/stop order. So these are the two options.
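For reference, here is a minimal sketch of the first option, a Deployment with a PVC, in manifest form; the names, image, and sizes are illustrative, not taken from the talk:

```yaml
# Sketch: a Deployment persisting data through a PVC. Names, image, and
# sizes are illustrative only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-data
spec:
  accessModes: ["ReadWriteOnce"]   # one node may mount this volume read-write
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: nginx:1.21
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: web-data   # data survives restarts, but hostnames stay random
```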
Now a lot of people think: I can just use the Deployment plus PVC, and that's good enough, it works, it persists the data. But that's not a truly stateful application. So it all seems very simple: you just use one of those two options, the Deployment or the StatefulSet. But there are complications here. The storage access policies are where people get confused, from what I've seen anyway. There are multiple access policies: ReadWriteOnce, ReadOnlyMany, ReadWriteMany. But the two most common are RWO and RWX. Now, just because your storage supports RWX, and by that I mean something like NFS, does not mean your application does. Just because the storage supports it doesn't mean your application supports multiple concurrent writers. An admin can manually set the storage access policy to something of their choosing. So you can take a storage driver, let's say AWS Elastic Block Store, that doesn't support ReadWriteMany, and the admin can say it supports ReadWriteMany, and Kubernetes will go okay. But then it will fail later on and strange things will happen. So even though you can set the access policy to anything you like, don't. Make sure you only set it to the things that are actually supported.
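For context, the access mode is declared on the claim itself. A hedged sketch, where the storage class name is an assumption:

```yaml
# Sketch: a PVC declaring its access mode. Kubernetes records what you claim
# here, but it is the CSI driver that determines what actually works.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany        # RWX: many nodes may mount read-write; only valid on storage like NFS
  storageClassName: nfs    # assumed class name; it must genuinely support RWX
  resources:
    requests:
      storage: 10Gi
```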
Now, why is this important? Because unless your application supports concurrent access to its data, you will likely end up in data corruption land. You'll get this "last change wins" behavior, where you've got two pieces of software writing to the same volume: if they both write a file of the same name, the first one writes the file, the second one writes the file, and the second overrides the first. The first one has no idea that that's happened. If you are using applications that use disk-locking technologies, like databases, you'll often get a locking violation. The first one will start and get a lock on the database; the second one will start, try to get a lock on the database, fail, and come up with a locking violation, then sit there in an infinite retry loop, trying to get a lock on the database and failing. Now as an example: the hostPath driver does not support RWX, but if you override it you can deploy an application like MySQL with a single persistent volume and two replicas. This is what I see most commonly requested in the Kubernetes community, especially the Kubernetes Facebook user groups. People say: I've got NFS storage, I'm trying to deploy MySQL with two replicas, and it's failing; why? My NFS supports RWX access. Why is this not working? This is a really common use case. And if you do this, what you end up with is exactly as I said before: one of the replicas working perfectly fine, as you'll see on the left-hand side, and one of the replicas failing to lock, as you see on the right-hand side.
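In manifest form, the anti-pattern being described looks roughly like the sketch below; it's illustrative only, and deliberately something you should not deploy:

```yaml
# ANTI-PATTERN sketch (do not use): two MySQL replicas sharing one volume.
# One pod takes the InnoDB lock; the other loops forever on locking errors.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes: ["ReadWriteMany"]   # RWX forced onto storage/apps that can't honor it
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-bad
spec:
  replicas: 2                      # two pods...
  selector:
    matchLabels: { app: mysql-bad }
  template:
    metadata:
      labels: { app: mysql-bad }
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: example       # illustrative only; use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mysql-data  # ...but a single shared claim
```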
Now I'll just show you this here in real time. I have a Kube cluster here, configured with hostPath storage, and it has an RWO access policy; I'm going to manually set it to RWX here. If I do this now, I can come in and say create an application: test app, MySQL 5.7. I'm going to persist the MySQL data directory, 20 gig. It's going to be shared, and I want two replicas of this, and I'm going to deploy it. This will now go and deploy two copies of the database with a single persistent volume. They both run; just choose one randomly.
So there you go, there's the first one here. This one is open and connected fine. If I go to the second one now and go to logs, there'll be locking errors: it'll say unable to lock, check that you don't have another MySQL process using the same data files. So even though it lets you deploy it, because I've manually overridden the storage to say it supports multiple writers, both of them are running from Kubernetes' perspective, but one MySQL instance is sitting there in an infinite loop saying, "I can't connect, I can't get a lock." So one of them works, the other one doesn't. And unless you know what you're doing, you will look at this and say, "Cool, both of my pods are running, it's working perfectly fine, I've got my database load-balanced." It's only if you come in and interrogate it that you'll see it isn't working as expected. So this is one of the things I see commonly, where people say: I've got NFS storage, I can't understand why I can't have two instances of MySQL accessing the same data and load balance between them.
Now, what if we try the same thing on something like DigitalOcean, which has DigitalOcean block storage? This does not support RWX; it only supports a single writer. So if you try to manually set it to RWX, you'll get a different error. Unlike the hostPath storage, which lets you do it, block storage simply will not allow it. If I manually set it to RWX and try to deploy a multi-pod application that accesses the same persistent volume, I'll get a multi-attach error, because it simply will not let you do it. So that should be pretty clear. Now, if you want to persist your data, make sure that you don't play around with your access policies, and make sure that you have a volume for each pod that you want to deploy, so each one can persist its own data. And then you have to take into account things like MySQL replication if you need to replicate the data: MySQL multi-master, or master-slave configurations, to replicate the data.
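The "one volume per pod" approach he describes maps to a StatefulSet's volumeClaimTemplates. A minimal sketch, with all names and sizes assumed:

```yaml
# Sketch of "one volume per replica": volumeClaimTemplates creates a distinct
# PVC per pod (data-mysql-0, data-mysql-1, ...), so no two pods ever share a
# write path. Assumes a headless Service named "mysql" exists.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 2
  selector:
    matchLabels: { app: mysql }
  template:
    metadata:
      labels: { app: mysql }
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: example     # illustrative only; use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]   # each pod gets its own RWO volume
        resources:
          requests:
            storage: 20Gi
```

Replication between the instances (multi-master, master-slave, and so on) is then the application's job, as he notes, not something the volume layer provides.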
But what happens if you want to persist things other than storage, if you have state other than data? You see here, if you use a Deployment type, the pods, and therefore the containers, are given a randomly generated string as their hostname, and they will run across the nodes of the cluster happily. But if you put a node into drain mode, or a node fails and the pod is rescheduled, that hostname is lost. I had five pods running, so five containers here; the bottom line is the hostname, and they're running across the nodes. If I put a node in drain, the pod is rescheduled and spun up on another node, and I've lost that hostname; the hostname is now randomized again. Unless your application is completely okay with having a random hostname each time, you're going to get yourself into trouble. So if you need the hostname state maintained, then a Deployment type is not going to be your friend here.
Now, most applications probably don't care about the hostname; they just ignore it, you can use localhost or whatever else. But if it matters, then you have to know that a Deployment will not persist your hostnames at all. If you use a StatefulSet instead, the pods are always given the same consistent numbering, from the first replica all the way through to the last, and that numbering directly translates to the hostname. And if I were to take a node offline, you can see here the pod would be rescheduled on another host but would keep the same name. Now let me show you that.
So if I go to this node here, this is a DigitalOcean cluster again. I want to deploy an application; MySQL is a bit slow to pull, so I'll just use Nginx, it's quicker. If I want to add a persistent folder, I'm just going to use /data, it doesn't matter for this case, and I want it shared, and I want to deploy it with five replicas. What you see here will eventually run: it pulls the image, waits for the persistent volume, and then DigitalOcean sends the instruction through to the backend, and it will eventually go attaching and deploying. Just my luck, we'll end up landing on the node that I can't put in maintenance mode. But here we go. So this is running as a StatefulSet here, and you see it's got replica zero as its name. And if I go into a console and run hostname, you see that it's given replica zero as the hostname. So this is showing you that you've got this consistent name.
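What the demo is showing corresponds to pairing a StatefulSet with a headless Service; a sketch of that wiring, with illustrative names:

```yaml
# Sketch of how stable identity is wired up: a headless Service plus a
# StatefulSet. Pods get fixed names (web-0, web-1, web-2) and stable DNS
# records like web-0.web.default.svc.cluster.local. Names are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None            # headless: per-pod DNS records instead of one VIP
  selector:
    app: web
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web           # ties pod DNS identities to the Service above
  replicas: 3
  podManagementPolicy: OrderedReady   # the default: start 0..N-1, stop in reverse
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: nginx:1.21
```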
Now if I edit this and increase the number of replicas, or change it in any way, it will keep that same naming over and over. Let me actually do another one for you and show you what I mean. This time I'll just do a Deployment, with multiple replicas, and the same thing: it'll go and create them, and these all get random hostnames. Now, I'll just try and choose one of these nodes. So there are five replicas running across three nodes. I click on one, and depending on what's running on it, I can potentially put it in drain mode. If I can't drain that one, I'll do the other node. So I've got this one in drain mode; that's going to evacuate the pods that are running on there. And if I go back to my application, they will all be rescheduled with more random names. They'll be running somewhere else, and they'll have random hostnames again. So you have to be careful here, and this is exactly what I mean about the random hostnames changing. They have now all been rescheduled onto the same host as the other ones, because a node has been put into drain mode, and they're named randomly. Now I'm creating another application here. This one takes a bit longer, because we're going to be creating a whole bunch of persistent volumes; it just takes a while. It's Nginx again.
This time we want it to be isolated, and we want multiple replicas. So here, when I've got this thing running, expanded, you'll see there are five of them, and you see here replica zero: it starts with the first one, and then it adds the second and the third and the fourth and the fifth. As it scales, it increases them in order: 1, 2, 3, 4, 5. It takes a few minutes, so while waiting for that one, let me just explain this. This is the start-up order here. With a StatefulSet, pods always start from the lowest number to the highest, and they stop from the highest to the lowest. So when you first deploy your application, it starts with the very first pod and brings it up, and once it's up and running, it starts the second, then the third, and keeps going through the set. If you do a redeploy, it does it in reverse: it stops them highest to lowest, and then restarts them in order. Same thing for doing an upgrade.
So that start-up order matters. If you have something like MySQL multi-master, or a single master with slaves, you need the very first one to be the master and the other ones to be slaves, so it matters a lot there. We've got two of them up now, see here, 0 and 1, and you see they're going out and creating the volumes. There's a volume per instance; this, again, is the correct way to do persistence across a multi-pod application. This is just going to replicate out, and you'll see it's going to keep going all the way to five: 1, 2, 3, 4, 5. And if you redeploy, as I said, it'll stop them in reverse order and then restart them back in the right order again. So it's a really useful way to get a consistent hostname. And again, if I come in and choose any one of these, grab a console, and run hostname, you'll see the hostname is persistent here as well. So that's the primary thing you need to realize in regards to stateful. Make sure that you are aware of the storage type and the access policy that your CSI driver supports, and don't change it unless you know what you're doing. Don't try to force a square peg into a round hole; otherwise you'll end up with data corruption, or issues where the application looks like it's running but it isn't, and you end up having to triage it. So don't change that at all. And also make sure, if you need a true stateful application, that you use StatefulSets, not Deployments.
And again, it's amazing how many people think that they can get persistence just by using a Deployment and attaching a persistent volume. The StatefulSet should have finished deploying all of them by now, and I'll show you when we redeploy it. So they're all done, all up and running here now. And if I were to take this node offline, which I can't because I've already got a node in drain mode, these would be rescheduled to another node and would keep the same hostname as they reschedule. So that's an important thing to have. Just on the storage configuration: you can set this access policy, and we have here a link to the documentation (https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). This is something you need to understand. This table is the source of truth; don't configure anything that conflicts with it. That's the short version.
So you can see NFS supports ReadWriteMany, and again, this is where a lot of newbies keep getting tripped up: just because the storage supports ReadWriteMany does not mean that you can automatically create a multi-ReadWrite replica of your application if your application does not support it. So I hope that's clear. If you need persistence, use StatefulSets. Don't manually override your storage access policy. Don't assume your application supports concurrent writes; never, never assume that. It's even more important if you're using databases, but even if you're just running a normal front-end web server and your users FTP or upload files through the front end, unless all of those instances know that file changes have been made, you will definitely end up with corruption. So don't assume that. Using Portainer will make your life far easier. Any questions on this?
Bart Farrell 29:07
A couple of things I wanted to ask. As you mentioned at the very beginning, a lot of folks have jumped straight into Kubernetes immediately. When you have the conversations about why run data on Kubernetes, what are the sort of benefits that you generally propose to them, as to why they should keep this in mind?
Neil Cresswell 29:33
There are a lot of people out there, I think, who are a bit scared of running persistent apps on Kubernetes. They're not sure how it's going to handle the workload; they're not sure how it's going to handle graceful shutdown. Will it stop my database gracefully when I stop it, or is it just going to pull the plug so I end up with data corruption? There's still a lot of fear there. There's a lot of fear about how to back up and recover these applications if they run on Kubernetes. With a virtual machine or a database service, everyone knows how to manage that: you know how to snapshot your virtual machine to back up the data, you've got all of the VM integrations for data protection, and if it's a cloud service, you know how to do this. I think a lot of people get a bit confused about what to do when it's in a container. But hang on: the actual image is always stateless, the image can always be re-pulled, the ConfigMaps can be recovered, the YAML can be saved and restored. You're only talking about your persistent volumes, so how do you protect and back up those? Also, there's actually nothing stopping you from doing things like DB dumps from within your stateful application, just running the MySQL tools and dumping the database file out to an external location. So the main concern people have is: if I move my application in, what do I have to change operationally to ensure my data is backed up and protected, and all of those normal things you'd expect for a critical system of record, which is what persistence generally describes? And persistence has two levels, right: data of record, and just transient data. Some data still needs to persist, but it's transient; and then there are data records, and data records are where people freak out.
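One way to realize the DB-dump idea he mentions is a CronJob that runs mysqldump on a schedule. The sketch below is an assumption throughout: the service name, Secret, and backup claim are all hypothetical:

```yaml
# Hypothetical sketch of "DB dumps from inside the cluster": a CronJob that
# runs mysqldump nightly against an in-cluster MySQL service and writes the
# dump to its own PVC. All names here are assumed.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mysql-dump
spec:
  schedule: "0 2 * * *"            # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: dump
              image: mysql:5.7
              command: ["/bin/sh", "-c"]
              args:
                - >
                  mysqldump -h mysql -u root -p"$MYSQL_ROOT_PASSWORD"
                  --all-databases > /backup/dump-$(date +%F).sql
              env:
                - name: MYSQL_ROOT_PASSWORD
                  valueFrom:
                    secretKeyRef:        # assumed Secret holding the DB password
                      name: mysql-secret
                      key: password
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: mysql-backup  # ideally backed by off-cluster storage
```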
Bart Farrell 31:24
Speaking of freaking out: one thing we talk about is what running data on Kubernetes is and how it's done, but then there's the other question, why? And that's where the alarm bells start to ring: we don't have the technical staff with the skills to do this, this is going to be a costly problem. Are there any other sorts of freakouts that you encounter when you interact with customers?
Neil Cresswell 31:46
I suppose the biggest one is with stateful applications and CI/CD. Everyone's quite comfortable with the whole GitOps paradigm and roll-forward, roll-back. I'm going to change live production through my CD system: a dev commits, and through magic and smoke and mirrors, a few minutes later the changes are live in production. If it breaks, the dev can revert the change, and five minutes later the change is reverted in production. That's all well and good. But when you introduce stateful applications, what do you do when you've got schema changes? You can't just roll back. So I think that's still one of the fear factors: if I'm going to go stateful, can I still do CI/CD, roll forward, roll back? Because once you roll forward, if you make a change and the data is persisted, how do you then revert that? It's not as easy as just rolling back. And that's where I think people are asking: hang on, is GitOps a thing only for stateless applications, and do I need to be more traditional for stateful? Or can I do something in between, where I can roll back my data as well, maybe integrating with third-party backup vendors to recover? So I suppose there's confusion there too. It just shows it's still very new, very nascent, for statefulness.
Bart Farrell 33:11
I think we can agree on that. And that's what we see, which is promising: we saw in the research report, and in other reports that have been coming out, that a lot more folks are running stateful applications and stateful workloads, though perhaps they're not necessarily making this visible; we're not sure about that. But once again, it's our job as a community to get these things out in the open. And one of the things that seems to be coming out in the open a fair amount, and I'd like to know your experience with this: what's your experience been using operators, and if you're using them, for what purpose? Do you think that's the primary solution we have right now for running data on Kubernetes?
Neil Cresswell 33:46
So I don't have that much experience using them; however, I understand what they are now, and I genuinely think they are going to be the painkiller for this. If you think about something like a Helm chart or a manifest file, it's simply a way to get the application running. It doesn't help you in any great way with configuring the application for multi-cluster replication, or configuring it correctly; it's the vendor saying, here's how you deploy our app, not necessarily how you configure it. Whereas an operator moves it up, into more of a managed-service-esque solution. It's the vendor saying: not only is this how you deploy the application, this is how you deploy and configure the application for best practice. So for example, the operator for MySQL is amazing. You can say, I want a multi-master MySQL cluster, and it will go and configure it for you, including all of the replication, everything, the whole thing. You don't need to know how to configure MySQL correctly for multi-master replication; the operator will do that for you. The Helm chart does none of that: you deploy it and then log in and configure everything yourself. So there's a huge difference. I think operators, especially in a stateful world, are going to take away a lot of the risk, because the application vendor ensures it's configured correctly. Because if it's configured incorrectly and you lose data, you only need to lose it once and you've got a serious problem on your hands.
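To make the contrast concrete, here is a sketch of the declarative shape an operator typically exposes; the kind, API group, and fields below are hypothetical, not any real operator's API:

```yaml
# Hypothetical custom resource shape for a database operator. The point is
# that you declare intent (a 3-member multi-primary cluster) and the operator
# performs the MySQL-specific configuration, replication included.
apiVersion: example.com/v1
kind: MySQLCluster          # hypothetical kind, for illustration only
metadata:
  name: orders-db
spec:
  members: 3
  topology: multi-primary   # the operator wires up replication to match
  storage:
    size: 50Gi
```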
Bart Farrell 35:21
And once again, the risk of losing that data is, I think, why there's this sort of aversion: "I don't want to put myself in any situation where that might come up." Now, we do have a question from someone in the audience, Rich, which touches on what you just mentioned, multi-master. He asks: for deploying MySQL, why would you not choose multi-master and a replica set (Percona, I guess) versus a master with Aurora-style read-only replicas on other nodes? And what about the new sharded MySQL options like Vitess?
Neil Cresswell 35:55
I've not used the sharded MySQL ones. But why would you use multi-master versus a single master with multiple replicas? I think it comes down to latency; you have to know the actual response time you need. Whenever I've deployed MySQL in a load-balanced environment, I've always done it multi-master; I've never actually done it as a master with multiple readers. I think the main reason people do that is for data warehousing: they use the one master node for their actual read-write transactions, and the read-only ones for data warehousing queries, or to clone data out for non-prod environments. But I've always used multi-master whenever I've needed a stateful MySQL database inside a cluster. As for the sharded ones, I play with things like CrateDB and CockroachDB, which genuinely scale out as true multi-master, and those things are amazing. They are the future of databases.
Bart Farrell 36:57
Following up with a couple of other things: if you had a magic wand, what would your wish be to make running data on Kubernetes easier? We talked about operators; is there anything beyond that? Could we be more ambitious?
Neil Cresswell 37:16
I think having a proper, seamless integration with snapshots and backups, so that when you roll back to a previous config in Kubernetes it's very easy to roll back. I do think that the rollback of a stateful application should include a rollback of its data state as well. Because to me that's the biggest barrier at the moment: once you make changes and write changes into your database, schema changes or whatever else, it's so hard to revert that short of restoring a backup. As you push a change through your pipeline, if you're updating the application, redeploying, or making a change, if there were a native way to snapshot and roll back as part of the rollback command, that would be gold. I think also making it far, far easier to export the volumes in a scheduled, scripted way for external backup, whether through a command to stream the volume to a third-party location, so that you've always got an off-cluster copy without having to rely on your storage vendor to do that for you. If I had a magic wand, those would be the two things.
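Part of this wish list exists today where the CSI driver supports it, via the VolumeSnapshot API; a hedged sketch, with the snapshot class and claim names assumed:

```yaml
# Sketch: a CSI VolumeSnapshot of a database PVC, taken before a risky change
# so the data state can be restored alongside a config rollback. Requires a
# CSI driver with snapshot support; class and claim names are assumed.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-pre-upgrade
spec:
  volumeSnapshotClassName: csi-snapclass   # assumed snapshot class
  source:
    persistentVolumeClaimName: data-mysql-0
---
# Restoring: a new PVC created with the snapshot as its dataSource.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-mysql-0-restored
spec:
  dataSource:
    name: mysql-pre-upgrade
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
```

This is still volume-level, not the application-aware, one-command rollback he's wishing for, but it's the building block that such tooling would sit on.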
Bart Farrell 38:32
As you said, this space is getting more and more attention, but we still feel we're in the early days, so there are still things that have yet to be decided. And this is why we've been asking people in the community what their experience has been, whether it's been building operators, because that seems to be the case for a lot of folks out there. But then what are the steps necessary for that to happen? Do you have the staff that can properly execute that without taking too much time away from your customers? Thinking about it, because you're someone in the position of CEO: when you're explaining this to a customer, they're thinking, okay, we have the expression in the US, and you're in New York, if it doesn't make dollars, it doesn't make sense. What are the financial benefits of running data on Kubernetes?
Neil Cresswell 39:19
I think the main thing is it lets you get more into that whole microservice approach. Rather than having a singular large database server somewhere with multiple DB instances, you can now have multiple smaller instances, with a database tied very closely to the front-end application that's using it, making it far more self-contained. That takes away the need for maintenance windows on a single large server. It makes it far more cost-effective to scale, and it's far easier to adequately define the resource limits and quotas and all that kind of stuff. So it just seems far more efficient. I suppose the only negative might be licensing: if you're using a licensed database, then obviously you're going to have far more instances of it versus a smaller number. There are those kinds of complications. But for me it's more around efficiency, and also truly enabling microservices, with a database that can be entirely consumed by the front-end application, as opposed to a shared database instance over here.
Bart Farrell 40:25
Good to keep that in mind. So we are getting towards the end. Is there anything else you'd like to mention in terms of news about Portainer? You're hiring, you're going all over the US right now, next steps? Actually, we just got another question, so before we get to that, I'll ask this question from Rich. Thank you, Rich. Have you had to troubleshoot operators much, and what was helpful there? Have you ever had to just start over from the beginning and restore everything?
Neil Cresswell 40:51
I have often had to restore everything, because we use Kubernetes ourselves in our internal testing environment for building Portainer, and I'm breaking my environment almost weekly, it seems. I have almost given up troubleshooting Kubernetes; I simply delete and redeploy it, it's quicker. It is quicker to spin up a new cluster. I do most of my testing on DigitalOcean nodes, and it's simply so quick to delete and redeploy, so that's the model I use. That might just be that I'm lazy, and I hate triaging Kubernetes issues, but I always just build a new cluster, redeploy my applications, and I'm up and running. As for having to triage operators, not yet myself, no.

Just one thing, by the way, that's really important in regards to stateful applications. If you think back three, four, five years ago, there were expert storage consultants inside IT, who, whenever someone in the business said, "Hey, I'm deploying a new application, it's going to need a database," went into some very detailed Excel-based analytics, calculations, and formulas: how many IOPS do I need, what kind of latency, the Exchange sizing tools, the MS SQL sizing tools, what RAID array, how many spindles do you need? All of that seems to have been forgotten, and I don't know why. A huge amount of thought went into ensuring the backend storage subsystem was suitably sized to deliver the latency, IOPS, and throughput the application needed, so the database would deliver the level of performance the application needed. A lot of that has been forgotten these days. And I think that unless we figure it out, you're going to find stateful applications biting people because they're simply underperforming, because there's just not been the sizing thought put into them. People say: I've got my database up and running, it's persisted, it's performing badly, why, what's going on, what's wrong? And it could be: well, hang on a second, you're running on some nasty backend storage. You can't run this on NFS; everyone knows you don't run databases on NFS, you just don't. But a dev who's using Kubernetes maybe doesn't know that the storage driver their admins have configured is on NFS. They don't know. So they just deploy an application, make it stateful, and it performs terribly. So there's still this wall of "I don't need to care about the infrastructure, I just want to deploy my app", and when it comes to statefulness, I don't think that wall is as solid as we'd like to believe. You still need to understand what's under the covers. Is this thing going to be performant enough for my application? Do I need NVMe-backed storage as opposed to SATA-backed storage? And how can I ensure that I'm getting the right level of performance? There's kind of no way of measuring or tiering that in Kubernetes today; you can't say this storage has been benchmarked by the Kubernetes layer and can deliver X. That doesn't exist at the moment. Sure, you can set limits and reservations, but unless you know what the backend can provide you, you're in dire straits. So that's another area that needs careful thought.
Bart Farrell 44:33
Plenty to chew on there. Is there anything else, perhaps reflecting on some of the things you saw at KubeCon, that you feel is not getting enough attention, in addition to what you mentioned?
Neil Cresswell 44:47
I think ease of use is being forgotten. I'm probably saying that because I'm biased, because our whole goal at Portainer is to make Kubernetes easy to use. Maybe I'm biased, but Kubernetes is getting so wide now; it can pretty much do anything you want it to do. But by going wider and doing more and more things, you're adding layers of complexity. If you have a tool that's a Swiss Army knife and can do anything, how do you know which particular blade you need? How do you navigate that knife? The wider it goes, the broader its scope, the more things it can deploy, an operator for everything; I think in some ways that's making it worse. The mental load on the user increases quite dramatically. So that's one area I think is almost being forgotten: we're adding quite a bit of extra complexity here.
Bart Farrell 45:52
How do we make that go away? That's one of the arguments as well for running data on Kubernetes: simplifying by having everything in one stack. And like you said, by doing that, you resolve one problem and potentially create another. So how do we then simplify the thing that was supposed to simplify but made it more complex?
Neil Cresswell 46:07
There's no silver bullet. But it makes things so much easier when we can say: "Here's my application, it's in this bubble, it's portable. I can now measure and manage and monitor this application in totality; if the application doesn't perform well enough, I can move it somewhere else that's more performant." That's far, far easier than having to worry about a shared service over here, traffic coming into my front-end application, back out of the cluster to potentially some middleware or another cluster, and back to a database somewhere else. Having it all self-contained makes it nice and easy and portable.
Bart Farrell 46:42
Thank you very much for taking these questions from the audience as well. I like your vision on this and on identifying the problems. But the thing is, there is progress being made, and we see that. Once again, we're approaching livestream number 100, so we've had nearly 100 people come on and share their experiences of what it's been like working with data on Kubernetes. And we see an increasing ecosystem: at the beginning more focused on databases and storage, but now branching out into other areas such as data streaming, even security, and other things coming in there like cost optimization. So it's building out and getting more robust. But as we said earlier, there's still a lot of ground to cover, and we're somewhat in the early days, which makes it exciting, because there is uncertainty. Now that we are getting towards the end, though, is there anything Portainer-news-related that we need to be aware of, stuff that's coming out, anything going on? Hiring, firing? Hopefully not firing.
Neil Cresswell 47:41
Never that. So we released a new version of Portainer, Portainer CE 2.9, just in time for KubeCon, with what we like to describe as an incredibly awesome Kubernetes experience. So if you want to get started with Kubernetes and you want a far gentler on-ramp to the technology, take a look at Portainer. Portainer is a commercial open-source company, so we have an open-source version and a paid version, Portainer Business Edition. Very soon, in the next two weeks, we will be offering a five-node free-forever license for that one. So for those people who want to get started with our commercial variant, which adds additional capability around audit and governance and security, look out for our five-node free-forever license coming in November, very, very soon.
Bart Farrell 48:41
Last but certainly not least, we have a tradition in our community: while the talk is ongoing, we have someone lurking in the shadows creating an artistic summary of all the things that have been discussed. So let me know when you can see my screen.
Neil Cresswell 48:56
Yeah, I can see it. Good.
Bart Farrell 48:59
There was a lot of stuff mentioned there, and hopefully we can share the slides later on, because some folks were asking about that. Very practical; the demo gods were on your side today, going back and forth between the slides, and it was all very well explained. Precisely as you mentioned, the onboarding experience for lots of folks getting into Kubernetes is so overwhelming that it becomes a turnoff. I have friends that have done DevOps, have done big data, have done all these other things, and when it comes to Kubernetes they're like, nah, this is not for me. So it's refreshing to see that there are folks in the ecosystem taking those things seriously. As much as we talk about technological stuff, a lot of it is empathizing with people and their struggles, difficulties, and frustrations, and making sure these things aren't siloed in a small group of individuals; it needs to be democratized and broadened out by becoming simpler, or, as some people say, by becoming a little bit more boring. Not precisely the word we would choose; "more comfortable", I think, would be better to say than "boring". So anyway, Neil, thank you very much for your time today. It was a pleasure, great interaction with the audience as well; a shout out to everyone who asked questions. If you want to continue the conversation, feel free to do so in our Slack. If you're not subscribed already, just hit the subscribe button, it's really easy. We'll have another livestream coming up this week and plenty more action as always. So Neil, enjoy the rest of your time in New York and the United States, and hopefully talk to you soon. All right,
Neil Cresswell 50:20
Thank you.
Bart Farrell 50:21
Take care, everyone. Bye
Published at DZone with permission of Sylvain Kalache.