Editor’s note: This post was last updated on 2 December 2021 to include information about recent updates to GKE, EKS, and Microsoft Azure.
Assuming you’ve already decided to run Kubernetes, also called K8s, you may have some concerns and questions regarding where to run it.
There are actually quite a few cloud providers that support Kubernetes, but we'll focus on the three major ones: Google Kubernetes Engine (GKE), Microsoft's Azure Kubernetes Service (AKS), and Amazon's Elastic Kubernetes Service (EKS).
In this article, I'll discuss the main features and capabilities of these three cloud providers and present what I think are some clear-cut criteria for choosing between them and alternatives like your own on-prem data centers or virtual service providers.
K8s is a modular, flexible, and extensible platform, so you can deploy it on-prem, in a third-party data center, on any of the popular cloud providers, or even across multiple cloud providers. The right course of action, of course, depends on your unique project. Let's consider the following scenarios.
For one, let's say you run your system on-prem or in third-party data centers. You've invested a lot of time, money, and training in your bespoke infrastructure, but the challenges of roll-your-own infrastructure become more and more burdensome over time.
Alternatively, you run your non-Kubernetes system on one of the cloud providers, but you want to benefit from the strengths of K8s.
You'll note that in both scenarios, I didn't mention containers. If you're already containerized, good for you! If not, consider it an entry fee.
You may have very good reasons to run in an environment that you control closely, for example, security, regulations, compliance, performance, or cost. In this case, the cloud providers might be a non-starter. However, you can still reap the benefits of K8s by running it yourself. Since you’re already doing it, you have the expertise, skills, and experience to manage the underlying infrastructure.
However, if you chose to invest in your on-prem infrastructure, or you're deployed across multiple virtual service providers simply because you started before the cloud was a reliable solution for the enterprise, it's a different story.
You have the opportunity to upgrade everything in one complicated shot. You can switch to managed infrastructure in the cloud, package your system in containers, and orchestrate them using K8s.
In the second scenario, choosing to run K8s managed by your cloud provider is probably a no-brainer since you already run in the cloud. K8s gives you the opportunity to replace many layers of management, monitoring, and security that you had to build, integrate, and maintain yourself with a slick experience that improves with each new Kubernetes release.
Now that we've reviewed these deployment scenarios, let's take a look at the three cloud providers we listed earlier.
K8s came from Google, and GKE is Google's official managed Kubernetes offering. Google SREs manage the K8s control plane and provide auto-upgrades. Because Google influences K8s and has used it as the container orchestration solution of Google Cloud Platform since day one, it makes sense that GKE would provide the best integration.
Similarly, you can trust GKE as the most up-to-date provider. GKE performs more testing on new K8s features and capabilities than the other cloud providers do.
On GKE, you don't have to pay for the K8s control plane; however, you do need to pay for the worker nodes. You'll also get access to the Google Container Registry (GCR), as well as Stackdriver Logging and Stackdriver Monitoring, which provide integrated central logging and monitoring. Lastly, to secure and optimize your deployment pipeline, you can leverage Google Cloud's native CI/CD tooling.
Google networking is considered top of the line in comparison to other cloud providers. At the time of writing, support for multi-instance GPUs in GKE is in preview.
GKE has some other neat tricks. For example, it takes advantage of general-purpose K8s concepts like Service and Ingress for fine-grained control over load balancing. If your K8s service is of type LoadBalancer, GKE will expose it to the world via a plain L4 TCP load balancer. However, if you create an Ingress object in front of your service, GKE will create an L7 load balancer capable of SSL termination. If you annotate it correctly, the L7 load balancer can even carry gRPC traffic.
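To make that concrete, here's a minimal sketch of both approaches for a hypothetical Deployment labeled app: web listening on port 8080; the names and ports are placeholders, not GKE requirements:

```bash
# Service of type LoadBalancer: GKE exposes it through a plain L4 TCP load balancer
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
EOF

# Ingress in front of the same Service: GKE provisions an L7 load balancer that
# can terminate SSL and, with the right annotations, carry gRPC. Note that GKE
# requires the backing Service to expose node ports (NodePort and LoadBalancer
# types both qualify).
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  defaultBackend:
    service:
      name: web
      port:
        number: 80
EOF
```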
GKE also offers two different modes of operation: Standard and Autopilot. Standard is the experience GKE has offered from the beginning, while Autopilot manages your cluster's nodes for you and, to optimize billing, charges you only for your running pods.
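As a sketch, creating a cluster in each mode differs by a single gcloud subcommand; the cluster names and region below are placeholders:

```bash
# Standard mode: you manage (and pay for) the worker nodes yourself
gcloud container clusters create my-standard-cluster --region us-central1

# Autopilot mode: GKE manages the nodes and bills based on your running pods
gcloud container clusters create-auto my-autopilot-cluster --region us-central1
```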
GKE even includes prebuilt K8s templates that you can use for your deployments. These are available on Google Cloud Marketplace.
K8s itself is platform agnostic. In theory, you can easily switch from one cloud platform to another, as well as run on your own infrastructure. In practice, when you choose a platform provider, you often want to utilize and benefit from its specific services, which then require extra work to migrate to a different provider or on-prem.
GKE On-Prem provides tools, integrations, and access to help unify the experience and treat on-prem clusters as if they run in the cloud. It’s not fully transparent, and it shouldn’t be, but it helps.
Microsoft Azure originally offered a solution called Azure Container Service (ACS), which supported Apache Mesos, K8s, and Docker Swarm. In October 2017, it introduced Azure Kubernetes Service (AKS) as a dedicated Kubernetes hosting service, and the other options eventually fizzled out.
AKS is Microsoft's managed K8s offering. AKS is very similar to GKE; for example, it also manages a K8s cluster for you for free. It's certified by the CNCF as Kubernetes conformant, meaning there are no custom hacks.
Microsoft has invested heavily in K8s in general and AKS in particular. AKS includes strong integration with Active Directory for authentication and authorization, integrated monitoring and logging, and Azure storage. You also get a built-in container registry, networking, and GPU-enabled nodes.
One of the most interesting features of AKS is its use of the Virtual Kubelet project to integrate with Azure Container Instances (ACI). ACI takes away the need to provision nodes for your cluster, which is a huge burden if you're dealing with a highly variable load.
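A hedged sketch of enabling this with the Azure CLI; the resource group, cluster, and subnet names below are placeholders:

```bash
# Enable the virtual nodes (ACI) add-on on an existing AKS cluster;
# pods scheduled to the resulting virtual node run on ACI, with no VM to provision
az aks enable-addons \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --addons virtual-node \
  --subnet-name myVirtualNodeSubnet
```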
In comparison to GKE, which is essentially the gold standard, AKS has drawn a few recurring criticisms from developers. In the very first implementations of AKS, setting up a cluster took a long time, 20 minutes on average, and startup time was highly volatile, taking more than an hour on some occasions, which made for a poor developer experience.
Originally, you'd also need some combination of a web UI (the Azure Portal), PowerShell, and a plain CLI to provision and set everything up. However, recent upgrades have improved the experience, with integrations with Visual Studio, Azure DevOps for pipeline management, and Azure Monitor for logging and monitoring your infrastructure.
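Today, a basic cluster can be provisioned end to end from the CLI alone. A minimal sketch, with placeholder names throughout:

```bash
# Create a resource group, an AKS cluster with the Azure Monitor add-on enabled,
# and fetch kubectl credentials
az group create --name myResourceGroup --location eastus
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 2 \
  --enable-addons monitoring \
  --generate-ssh-keys
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes
```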
Azure Monitor includes optimizations to help your application’s performance overall. For example, it tells you what is running on each node, as well as average CPU and memory utilization. It also provides information about which containers reside in a controller or a pod, helping you monitor overall performance.
Azure Monitor also surfaces the resource utilization of workloads running on the host that are unrelated to the standard processes supporting the pod, and it profiles the cluster's behavior under average and peak loads, helping you identify capacity needs and determine the maximum load the cluster can sustain.
Today, both Visual Studio and Azure DevOps are widely used by developers. Support and integrations with those two services greatly improve the AKS developer experience.
Amazon was a little late to the K8s scene. It always had its own Elastic Container Service (ECS) container orchestration platform. However, customer demand for K8s was overwhelming. Many organizations ran their K8s clusters on EC2 using Kubernetes Operations (kOps) or similar tools.
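For context, a typical kOps workflow looked something like the sketch below; the state-store bucket, cluster name, and zone are placeholders:

```bash
# Self-managed K8s on EC2 with kOps: cluster state lives in an S3 bucket
export KOPS_STATE_STORE=s3://my-kops-state-store
kops create cluster --name=k8s.example.com --zones=us-east-1a --yes
```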
AWS moved to provide proper support with official integrations, and today, EKS integrates with IAM for identity management, AWS load balancers, networking, and various storage options.
An interesting twist for AWS is the Fargate integration, which is similar to AKS with ACI. Fargate eliminates the need to provision worker nodes. Additionally, it may allow Kubernetes to automatically scale up and down via its Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA) capabilities, providing a truly elastic experience.
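For example, on a Fargate-backed cluster a plain HPA can add pods without anyone pre-provisioning worker nodes. A minimal sketch for a hypothetical "web" Deployment (it assumes the metrics server is installed, as HPA requires):

```bash
# Scale the hypothetical "web" Deployment between 2 and 20 replicas, targeting
# 70% average CPU; on Fargate, each new pod gets its own compute capacity
kubectl autoscale deployment web --cpu-percent=70 --min=2 --max=20
```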
On EKS, you have to pay for the managed control plane, which could be a limiting factor if you just want to experiment with K8s or you have lots of small clusters.
As far as performance goes, EKS falls somewhere in the middle; it takes about 10 to 15 minutes to start a cluster. As you'd expect, the performance of complex distributed systems is very nuanced and can't be captured by a single metric. That said, EKS itself is still relatively new, and it might take a while until it can take full advantage of the robust foundation of AWS.
EKS also has offerings tailored to several different use cases. For one, you can leverage EKS with a completely cloud-based infrastructure, including Fargate or EC2. Amazon EKS Anywhere is a prepackaged solution that can run in on-prem or hybrid environments. Lastly, Amazon EKS Distro lets you deploy EKS with your own tooling.
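As a sketch, eksctl, the official EKS CLI, covers the first two cases; the cluster names, region, and config file below are placeholders:

```bash
# Cloud-based EKS cluster whose pods run on Fargate instead of EC2 nodes
eksctl create cluster --name my-cluster --region us-east-1 --fargate

# EKS Anywhere cluster, created from a declarative spec via the eksctl anywhere plugin
eksctl anywhere create cluster -f cluster-config.yaml
```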
In addition to the K8s implementations we've covered above, there is a newer, lightweight alternative called K3s. K3s is targeted at edge and IoT-based applications, and, packaged as a single binary, it has a much smaller footprint than traditional K8s.
You can set up and install K3s with the major cloud providers, but you have to do a lot of the legwork yourself by installing the binary on the cloud instance that you are working with.
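For reference, the documented K3s quick-install on a Linux host (such as a cloud VM you've already provisioned) looks like this:

```bash
# Download and run the official K3s installer, then verify the single-binary cluster is up
curl -sfL https://get.k3s.io | sh -
sudo k3s kubectl get nodes
```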
However, this opens up the market for clusters to more distributed devices, which could be a great offering for customers interested in the benefits of K8s without the traditional infrastructure requirements.
We know that Kubernetes is an excellent choice for container orchestration, but the big question for you is where you should run it. Usually, the answer is simple. If you’re already running on one of the cloud providers, just migrate your system to K8s on that cloud platform.
If you have specialized needs and run on your own hardware, you can either run your own K8s cluster or treat such a big infrastructure migration project as an opportunity to finally move to the cloud.
You could also use K3s if you have more specialized needs but want some of the benefits that K8s has traditionally provided. Whatever your use case, there are a lot of options for your applications.