Managing Kubernetes is hard, and many organizations are starting to realize they can better focus on other, as-yet unsolved engineering problems if they hand off a big chunk of their container orchestration responsibilities to managed service providers.
Today, the most popular managed Kubernetes options—sometimes referred to as Kubernetes as a service (KaaS)—are Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE). Since first launching these services around 2018, each cloud provider has added increasingly managed variants, such as the highly opinionated GKE Autopilot and the serverless option of running EKS on AWS Fargate. There are other options, such as Rancher, Red Hat OpenShift, and VMware Tanzu, but the Big Three cloud vendors dominate this area.
Cloud vendors have striven to strike the right balance between letting customers control and integrate what they need and abstracting away tricky autoscaling, upgrade, configuration, and cluster management tasks. As these managed services have matured, many organizations have concluded that managing their own Kubernetes clusters is taxing, nondifferentiating work that is increasingly unnecessary.
“Folks going all the way down to open source binaries and writing their own tooling is a pretty extreme example, and there are very few reasons to do that today, unless you are using Kubernetes in a way that is really unique,” said Joe Beda, Kubernetes’s cofounder and principal engineer at VMware Tanzu.
“There are always exceptions for organizations with strong engineering and operations chops to run Kubernetes themselves, but it became clear that for most customers it was a daunting task,” said Deepak Singh, vice president of compute services at Amazon Web Services. “The challenge of scaling Kubernetes, the complexity of managing the control plane, the API layer, the database—that isn’t for the faint of heart.”
Brendan Burns, corporate vice president for Azure Compute and formerly a lead engineer on Kubernetes at Google, sees this newfound appetite for managed Kubernetes services as being driven by the dual factors of better enterprise functionality—specifically features such as private network support and consistent policy management capabilities—and the broader business drivers toward increased agility and velocity.
What changed with the managed services?
Stephen O’Grady, cofounder of the developer-focused analyst firm RedMonk, sees a similar pattern playing out with Kubernetes today as previously occurred with databases and CRM, where no administrator would hand over their crown jewels to a managed provider—until they did.
“When enterprises consider something strategic, the initial inclination is to run it themselves,” he said. “Then they realize over time as they acclimate that not only is it not giving them any competitive advantage, it is more likely than not the vendors can run it better than they can. Is every enterprise going down this route? Not yet, but the appetite and direction of travel seem clear.”
Ihor Dvoretskyi, a developer advocate at the Cloud Native Computing Foundation (CNCF), is seeing this trend play out across a wide variety of Kubernetes users. “These days, we can see bigger customers in regulated environments using managed services more intensively than before,” he said.
Take the financial data giant Bloomberg. Back in 2019, its head of compute infrastructure, Andrey Rybka, told InfoWorld, “You really have to have an expert team that is in touch with upstream Kubernetes and the CNCF and the whole ecosystem to have that in-house knowledge. You can’t just rely on a vendor; you need to understand all the complexities around this.”
Fast-forward to today. Bloomberg now has workloads in production with all three major managed Kubernetes services. What changed?
“The cloud providers have been making a good effort to improve the quality of service around their Kubernetes offerings,” Rybka said. “So far, the trend line has been really good toward the maturation of managed services.”
It also comes down to using the right tool for the specific job. Bloomberg still runs about 80% of its Kubernetes workloads on-premises, and it has invested heavily in developing the in-house skills to reliably manage that environment and an internal developer platform on top of it. For cloud-appropriate workloads, however, “we are reliant on the managed Kubernetes offerings, because we can’t do a better job,” he said.
The growing appetite for managed Kubernetes
Wherever you look, the numbers reflect this shift away from self-managed open source Kubernetes to managed distributions.
In the latest CNCF Cloud Native survey, 26% of respondents use a managed Kubernetes service, up from 23% the year before and catching up fast to on-premises installations, at 31%. Because respondents are drawn largely from CNCF members, the sample may skew toward self-managing organizations that traditionally tinker with their own Kubernetes clusters, so actual usage of managed Kubernetes could be higher than the survey indicates.
Flexera’s 2021 State of the Cloud Report shows that 51% of respondents use AWS managed container options, which include both Amazon EKS and Amazon’s non-Kubernetes ECS service. Self-managed Kubernetes is at 48%, just above Azure’s managed Kubernetes service (AKS) at 43% and Google’s (GKE) further down at 31%.
According to Datadog’s latest Container Report, roughly 90% of organizations running Kubernetes on Google Cloud rely on GKE, and AKS is fast becoming the norm for Kubernetes users on Azure, with two-thirds of respondents having adopted it. Meanwhile, Amazon’s EKS is up 10% year-on-year and continues to climb steadily.
At AWS specifically, Singh says “very few customers who start on AWS today don’t start on EKS, and a large number of customers who did run their own Kubernetes now run on EKS, because [running it themselves] is just not worth it.” For example, flight metasearch engine Skyscanner recently moved away from self-managing its Kubernetes in favor of EKS, he said.
Why go with a managed Kubernetes service?
Lack of internal expertise, ensuring security, and actually managing containerized environments were among the Kubernetes challenges most cited by respondents to the Flexera survey.
At organizations with fewer than 1,000 employees, where cloud-native expertise is harder to come by, managed Kubernetes is even more popular, the Flexera survey showed. AWS managed options are by far the most prevalent way to manage containers, at 52%, with self-managed Kubernetes at 37%, Azure-managed at 35%, and GKE-managed at 23%.
The CNCF’s Dvoretskyi cites management overhead and time and resource consumption as the leading drivers to adopting managed Kubernetes. “If they can be satisfied by a managed service, it is an obvious choice to not reinvent the wheel,” he said.
For global travel technology company Amadeus, managed Kubernetes services fulfill their promise of simplified management. Amadeus has been steadily shifting towards Kubernetes as its underlying infrastructure since 2017.
“It is less work, let’s be clear. It is operated for us, and that matters because we have a challenge to have all the people we need to run [Kubernetes],” said Sylvain Roy, senior vice president of technology platforms and engineering at the company. Today, Amadeus runs about a quarter of all its workloads on Kubernetes, either on-premises or in the private or public cloud, primarily through Red Hat’s OpenShift platform.
“The number one factor is the total cost of ownership: How much will it cost and how many people do we need to operate it compared to our own setup?” Roy said about considering a workload for managed Kubernetes.
Amadeus has not yet moved any workloads to a managed service, but following a new deal with Microsoft, it is testing AKS and other managed services “where and when it makes sense.”
For now, that doesn’t include core applications. But for “the tooling and apps that are not core to what we do, and for smaller, niche use cases, using something like AKS makes sense,” Roy said.
The issue of trust in Kubernetes service vendors
For many organizations, the decision to use a managed Kubernetes service boils down to trust, as the vendors acknowledge.
“There was a fear when Kubernetes came out that it was a bait-and-switch, a land grab from vendors to take from open communities and that it would morph into open core. It has taken five, six years almost to disprove that,” said Kelsey Hightower, a principal engineer at Google Cloud.
Similarly, AWS’s Singh said it is important to some customers that EKS stays close to the open source distribution of Kubernetes, “with no weird voodoo going on there that would create differences.” AWS recently open-sourced its EKS Distro on GitHub as a way to prove this out.
VMware’s Beda admits that “it is hard to have this conversation without talking about lock-in,” and urges anyone making these buying decisions to assess the risks appropriately. “How likely are you to move away? If you do, what will be the cost of doing that? How much code rewriting will you need to do and how much retraining? Anybody making these investments needs to understand the requirements, risks, and trade-offs to them,” he said.
For its part, the CNCF runs the Certified Kubernetes Conformance Program that ensures interoperability from one installation to the next, regardless of who the certified vendor is.
Why isn’t everyone on the managed Kubernetes train?
At companies as large and complex as Bloomberg and Amadeus, some legacy or highly sensitive workloads will simply have to remain on-premises, where the Kubernetes clusters they run on will likely remain self-managed for some time yet.
“Those who want to self-manage parts will be worried about the data plane; they need to customize or specialize in certain areas. They don’t mind a managed control plane,” Google’s Hightower said.
AWS’s Singh sees two types of customers who have yet to jump on the managed Kubernetes bandwagon: those he defines as “builders,” and those with deeply entwined dependencies. For the builder class, “our focus is recognizing them and spending time to give [them] core Kubernetes on AWS,” with projects like the open source Karpenter autoscaler as an example.
“The second class is someone that does not run pure Kubernetes, and they have made forks and changes and picked up dependencies where a managed control plane they can’t access becomes a problem. They have built a Franken-Kubernetes, and it takes them some time to get back to vanilla Kubernetes,” he said.
For organizations that have already made big investments in developing and hiring the skills required to fine-tune their own Kubernetes clusters, those skills won’t go to waste just because they adopt managed services where appropriate, said the CNCF’s Dvoretskyi.
“Those skills are definitely not useless,” Dvoretskyi said. “Even if you are using fully managed Kubernetes and only writing some apps on top of your existing cluster, knowing how it works under the hood helps build those more efficiently.”
At this stage in the life cycle of Kubernetes as a core enterprise technology, all the signs point toward fewer and fewer compelling reasons to get under the hood of your own Kubernetes setup.
“Perhaps you see it as an existing investment that no one wants to write off as a sunk cost yet, or there are conservative organizational concerns about a set of workloads or the business,” O’Grady said. “Or there is apprehension to have a piece of your infrastructure, which is perceived as strategic, leave your control. But when you see your peers doing it, that apprehension goes away, and you will see more people realizing the benefits.”
Copyright © 2021 IDG Communications, Inc.