A Guide to Kubernetes Distributions
Kubernetes facilitates the deployment of containers at scale, allowing application design to be standardized into modular, portable microservices that can be deployed across multiple cloud environments. Yet while the efficiency and long-term viability of Kubernetes are apparent, it has a reputation for complexity. To that end, CloudOps has compiled a summary of the best-known tools for managing Kubernetes clusters. Organizations wanting to leverage the capabilities of Kubernetes can look to a variety of open source tools, distributions, and managed services.
With open source tooling, generically referred to as ‘vanilla’ Kubernetes, your internal Operations team is responsible for your entire container deployment, and its success will depend on their expertise. An experienced team will know how to take advantage of the flexibility provided and plan the deployment around your business’ unique requirements. They will know how to manage version upgrades and, by contributing to the source code, add features that suit your application’s requirements and release schedule. Beyond carrying no licensing cost, open source tools give you full control over the destiny of your container deployment.
However, an inexperienced Operations team might struggle with the available options. Kubespray is known for lengthy deployment times, kops supports only a limited set of cloud providers, and kubeadm makes High Availability (HA) installation complicated. While the open source installation tools are not difficult in and of themselves, the sheer number of configuration and deployment options can make Kubernetes very hard to set up for production use. The lack of turnkey enterprise features, especially Lightweight Directory Access Protocol (LDAP) integration and pre-configured Role-Based Access Control (RBAC), might limit the ability of some organizations to adopt an open source strategy. If you choose to leverage open source tools, you will have more control over your destiny, but be prepared for more complexity.
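To make the HA point concrete, a minimal sketch of the kubeadm configuration for a highly available control plane, assuming a load balancer already fronts the API servers (the endpoint name and Kubernetes version below are illustrative):

```yaml
# kubeadm ClusterConfiguration for an HA control plane.
# "lb.example.com" is a hypothetical load balancer fronting all
# control-plane nodes; the version is illustrative.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.27.0
controlPlaneEndpoint: "lb.example.com:6443"
```

Each additional control-plane node then joins with `kubeadm join --control-plane`, and that is where much of the HA complexity (certificate distribution, etcd topology) surfaces.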
Distributions offer a compromise between the flexibility of open source tooling and the ease of managed services. They still require an internal Operations team to oversee the deployment, but they simplify the adoption of Kubernetes by presenting opinionated tools for building and managing clusters. Distribution vendors often provide complete platforms that define processes for running builds and tests, creating images, deploying, and staging production. Paid support contracts are generally available, as are value-added features such as LDAP support with RBAC, though relying on them increases the risk of vendor lock-in. While distributions limit how far you can customize your experience (version upgrades generally remain in your hands), they allow developers to automate container operations much more quickly. Below are a few well-known distributions available today.
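For comparison, the RBAC primitives themselves are standard Kubernetes API objects; what distributions typically add is the glue to an enterprise directory. A minimal Role and RoleBinding granting read-only access to pods (the namespace and user name are illustrative):

```yaml
# A Role granting read access to pods in one namespace,
# bound to a single user (e.g. one mapped in from LDAP).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
- kind: User
  name: jane        # hypothetical user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```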
Red Hat OpenShift — Delivered as an opinionated PaaS built on top of a Kubernetes infrastructure, Red Hat OpenShift offers more than most other distributions. It is a full platform solution that oversees all aspects of the software development life cycle, including access control, building code, running tests, creating and uploading images to an image repository, and deploying published images and application clusters. The entire stack has guaranteed interoperability between the OS (RHEL), the orchestration layer (Kubernetes), and the runtime (Docker). Updates are validated and released in batches, which ensures cohesion but can result in feature lag. Users can choose among an ‘Online’ option (a tenant in Red Hat’s own deployment), a ‘Dedicated’ option (managed for you by Red Hat), a ‘Container Platform’ option (deployed in your data center, with supported Gluster storage integration available as a paid add-on), and the ‘Origin’ option (the open source version). Overall, Red Hat provides an extremely stable enterprise offering that is consistently easy to implement and manage.
Rancher — Rancher is deterministic in deployment and lightweight in installation. It is open source and requires no support contract. The usable, manageable platform lends itself to a simplicity and flexibility that, along with easy management of multiple Kubernetes clusters, make it ideal for straightforward infrastructures. It also provides Active Directory (AD), LDAP, and Security Assertion Markup Language (SAML) support. With version 2.0, Rancher standardized its orchestration layer on Kubernetes and added multi-cluster management, alerts and log aggregation, and application workload management across any Kubernetes cluster.
Tectonic — Tectonic employs Kubernetes and the CoreOS stack to run Linux containers, enabling the user to leverage CoreOS Container Linux, a lightweight container operating system. Tectonic also supports Quay Enterprise, a multi-tenant container registry with image vulnerability scanning. A monitoring stack is included within the core product for improved operational visibility. Tectonic provides fully managed Vault secret-management instances on demand, with support for automated updates, high availability, and backup/restore. Tectonic deploys a ‘vanilla’-like form of Kubernetes with added enterprise features, such as RBAC and LDAP support. No support contract is required for small deployments of up to ten nodes.
Canonical — Canonical offers an opinionated deployment that uses Ubuntu for the entirety of its node configuration. While AD and LDAP support are provided, upgrades are non-trivial. Like Tectonic, Canonical deploys a ‘vanilla’-like form of Kubernetes with a few enterprise features. In addition to its distribution, Canonical offers managed Kubernetes services that can run in either your data center or public clouds. Canonical also partners with Google so that GKE worker nodes can leverage Canonical’s Kubernetes distribution, enabling a fully managed offering in which Google runs the master nodes and Canonical runs the worker nodes.
Managed Kubernetes offerings enable enterprises to entrust container orchestration to the service provider within the security of an SLA. While they force you to adapt your application to the service, they ease the adoption and management of Kubernetes clusters by offering in-depth services that vary among providers. Most managed Kubernetes services provide and operate master nodes in addition to service integrations, such as ingress controllers, storage, image registries, and identity management. Many also offer container-optimized operating systems for worker nodes. Public clouds leverage their existing resources to provide infrastructure, which removes the need to purchase and maintain hardware. They simplify and expedite the process of installing and managing containers.
Managed services enable leaner deployments and smaller, more focused development and operations teams. Some flexibility is sacrificed because you must adapt your application life cycle to the service. As master nodes are automatically upgraded (roughly every three months), worker nodes must be kept current (usually within two minor versions) to avoid becoming unsupported. Version upgrades can also introduce feature changes that your application is not yet ready to support. Likewise, dependence on provider-specific features increases the chance of vendor lock-in. Managed Kubernetes can ease the installation and management of Kubernetes itself, but first check how its limitations could affect your business.
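The version-skew constraint can be checked mechanically. A small sketch, assuming the commonly cited rule that worker nodes may trail the control plane by at most two minor versions (the version strings below are illustrative):

```shell
# Check whether a worker node's minor version is within the supported
# skew (at most two minor versions behind the control plane).
skew_ok() {
  cp_minor="${1#*.}"     # e.g. "1.27" -> "27"
  node_minor="${2#*.}"
  skew=$(( cp_minor - node_minor ))
  [ "$skew" -ge 0 ] && [ "$skew" -le 2 ]
}

skew_ok "1.27" "1.25" && echo "1.25 worker: supported"
skew_ok "1.27" "1.24" || echo "1.24 worker: out of skew, upgrade required"
```

In practice the same comparison is done against the versions reported by `kubectl get nodes` after the provider upgrades the masters.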
Google Kubernetes Engine (GKE) — GKE was the original managed Kubernetes service in the market and, as such, has the most mature offering. Kubernetes was open sourced by Google, which consequently contributes more than anyone else to the source code. GKE goes beyond the standard and expected features to include, for example, automatic upgrades and autoscaling of worker nodes through its administration portal and integrated cloud service features. Google manages the master nodes as part of the GKE offering and, while you don’t have access to manipulate those nodes according to your needs, you also aren’t charged for their computing resources. Additionally, you can trust that the master nodes are deployed in HA with an SLA. GKE works well with both Google Cloud Storage and Google’s identity management, and allows easy integration with other Google Cloud services.
Amazon Elastic Container Service for Kubernetes (EKS) — While newer to the market, Amazon’s Kubernetes service is expected to eventually have equivalent functionality to Google’s GKE offering. AWS is, generally speaking, the most mature cloud offering on the market with extensive value-added services and integrations available. It is only a matter of time before the EKS offering is able to fully leverage this extensive ecosystem of services to deliver a fully integrated solution. EKS is an AWS managed Kubernetes deployment offering seamless integration with AWS. However, as a cloud-based container service built on a fully proprietary ecosystem, this offering has more potential for vendor lock-in.
Microsoft Azure Kubernetes Service (AKS) — Also new to the market, AKS is still establishing differentiated value. Like EKS, only time will tell how well the service adapts to the market and develops its offering. If you are already using Microsoft’s Azure services, AKS is a good offering to evaluate and consider. Given how new both EKS and AKS are, it is difficult to compare them with GKE, which has an obvious lead in this space. Expect Microsoft to make a big push with this service — it will be one to watch going forward.
Navigating the Open Seas
Kubernetes has proven itself to be a robust and reliable technology that will increase the agility and efficiency of your organization. Its installation and management can be complex, but there are tools to help. ‘Vanilla’ deployments offer the most flexibility, but their operational intricacy can overwhelm smaller or less experienced teams, especially those that only operate a few clusters. Distributions manage platform architectures and dependencies by prescribing deployments, making it faster for developers to push application code to source control repos. Managed services assume total responsibility for operating the Kubernetes management layer, enabling developers to develop, deploy, and scale cloud applications very quickly with on-demand clusters. The process is easier, but the flexibility is limited and there is a strong risk of vendor lock-in. Think about your organization’s requirements when choosing a solution for Kubernetes.