A recent study entitled The State of Kubernetes in 2022, sponsored by our partners at VMware, shows continued strong growth in Kubernetes adoption, despite its complexity. The report found that Kubernetes has become a dominant force in enterprise application development and deployment, with four-fifths of organisations planning to run more than one container orchestration tool in the next few years.
Kubernetes is the leading open-source system for automating container operations, and it is seeing wide adoption in production environments. Users are becoming more confident in their ability to scale, and more demanding of it. Kubernetes enables organisations to manage diverse deployments by providing a common mechanism for automation and for standardised deployment, operation and management of clusters. The technology is in heavy use for production workloads, and allows companies to manage their complex IT environments so that they can focus on innovation and growth. However, this comes at a cost, and that cost is scale.
The State of Kubernetes report shows that 85% of survey respondents use Kubernetes in production, and in some organisations it's being used by 95% of their users daily. 12% of respondents have 5 or fewer clusters, while 29% are operating 50 or more. Two-thirds of those with 100+ clusters expect usage to grow by a further 50% this coming year.
So how many nodes are users operating in a cluster? Responses in the survey varied, with a majority (61%) saying they operate between 6 and 20 nodes. What we're lacking is information on why and how they came to that decision. A key consideration is what the structure of these clusters looks like. How are the nodes carved up? What are the best practices for sizing control plane nodes so that they are neither too big nor too small?
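One control-plane sizing constraint is well established even if the report doesn't cover it: a Kubernetes control plane backed by etcd needs a majority of etcd members healthy to accept writes, which is why control planes are almost always run with an odd number of nodes. A minimal sketch of that arithmetic (the node counts here are illustrative, not figures from the report):

```python
# Quorum arithmetic for an etcd-backed Kubernetes control plane.
# A cluster of n etcd members needs floor(n/2) + 1 healthy members to
# keep quorum, so it tolerates floor((n - 1) / 2) member failures.

def quorum(n: int) -> int:
    """Members that must be healthy for the cluster to accept writes."""
    return n // 2 + 1

def failure_tolerance(n: int) -> int:
    """Members that can fail before the control plane loses quorum."""
    return (n - 1) // 2

for n in (1, 2, 3, 4, 5):
    print(f"{n} control-plane node(s): quorum={quorum(n)}, "
          f"tolerates {failure_tolerance(n)} failure(s)")
```

Note that even member counts buy nothing: 4 nodes tolerate the same single failure as 3, while adding one more machine to keep available. That is the arithmetic behind the conventional 3- or 5-node control plane.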
And just how big are those nodes? Enterprises don't want to be reining in 6 massive (in complexity and scale) nodes, because the blast radius when one of them fails is huge; in contrast, the ongoing maintenance burden of lots of small nodes is higher. The range of 6 to 20 nodes is a big one, particularly with so little detail on how those clusters are made up.
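The trade-off above can be put in rough numbers. Assuming a fixed total cluster capacity split evenly across nodes, the blast radius of a single node failure shrinks as the node count grows, while the patching and upgrade surface grows linearly. A back-of-the-envelope sketch, not data from the report:

```python
# Back-of-the-envelope: blast radius vs. maintenance surface for a
# cluster of fixed total capacity split across n equally sized nodes.

def blast_radius(n: int) -> float:
    """Fraction of total cluster capacity lost when one node fails."""
    return 1 / n

for n in (6, 20, 50):
    print(f"{n} nodes: one failure takes out {blast_radius(n):.1%} of capacity, "
          f"but there are {n} machines to patch and upgrade")
```

At 6 nodes a single failure costs roughly a sixth of the cluster; at 50 it costs 2%, at the price of maintaining 50 machines. That tension is exactly why the 6-to-20 band needs more detail than the survey gives.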
Questioned about the rationale for Kubernetes adoption, the biggest stated benefit was increased flexibility of applications (62%), followed by improved cloud utilisation, better developer efficiency, cost reduction, and for 37%, the need to improve operator efficiency.
The State of Kubernetes survey data provides valuable insight into how real-world users are leveraging Kubernetes to build, manage and secure applications and workloads across public cloud platforms. The results reflect an ever-evolving industry as organisations move towards hybrid cloud environments where they can use the native tools and interfaces of different public clouds to deploy applications and services, and use on-premises infrastructure for sensitive workloads. Criticism of the report has pointed to a missing dataset on why businesses are adopting multi-cloud Kubernetes. In our experience, multi-cloud Kubernetes is here to stay.
Getting started is a little too easy
Hyperscaler clouds have made the entry point for users really easy, but this comes at the cost of losing track of what you're spinning up. When you manually build all of the elements of a cluster, with no shortcuts, you learn all of the intricacies of how they work.
When someone takes away that struggle, suddenly, you don’t know how to manage most aspects of a cluster. A question that’s often overlooked with this ease of spinning up is “Why do you think you need that many clusters?”.
It may be technically easier to deploy one cluster in one site rather than stretch it across multiple sites, but that trades resilience for simplicity. Spinning up multiple separate clusters means you need to build a robust network between them, particularly if they host microservices that need to talk to each other.
Having one cluster running across multiple cloud providers offers complete resilience if you can structure it in the right way; businesses need to account for robust security models and ensure diversity across providers, both of which can be difficult to execute. Solutions are available, however. For instance, at Ori, we allow deployment to a mesh over multiple clusters, as a hybrid cluster overlaid across your infrastructure. If your clusters are separate and segregated from each other, then your workloads can’t easily be linked and therefore can’t interact with each other.
So how did we get here?
The numbers in the State of Kubernetes report are scary, particularly the sizes some clusters have reached and the rate of adoption for something that remains a relatively new technology. A finer detail the report could have added is how many respondents are running Kubernetes while admitting they don't have the skills to maintain it.
Kubernetes migration horror stories all begin very much the same way: a senior stakeholder comes in and proclaims "We've got to containerise everything!". A lot of the early adoption is driven by hype, without really thinking about whether it's the right technology for an app, platform or solution. If adoption is driven by assumption rather than clear need, how do we best ascertain the real demand?
The biggest challenge called out by respondents is security and compliance. Enterprises and developers are running anything and everything on the internet in Kubernetes deployments, while their biggest concern is securing them. This seems almost counterintuitive given the growing rate of adoption. Historically, technologies that are hard to secure would be avoided, or deployed and scaled extremely carefully.
It's eye-opening because it is moving so fast. If a business can get a cluster running in AWS in 5 minutes, it raises the question: where are all the normal processes and best practices? With control and manoeuvrability compromised, and hyperscaler costs rising, many businesses need hybrid multi-cloud.