Kubernetes is an orchestrator that aims, among other things, to automate the management of your applications. It provides many features out of the box, but still has some gaps that can be filled by numerous tools and plugins.
Some of these tools help you achieve a fully automated integration, while others package powerful features. They come in many forms, from plugins directly integrated into Kubernetes to external services connected to your cluster. Every new tool means more complexity and learning cost, so it is critical to choose wisely the ones that will bring the most value.
This article presents some tools broadly used by the community and showcases a fully automated integration of Kubernetes. If you have recently decided to use Kubernetes and are still unsure how to integrate it, this article will give you some ideas.
Automate continuous deployment with external services
One of the strengths of Kubernetes is its ability to automatically manage and heal applications. However, when using Kubernetes out of the box, the deployment process still requires manual intervention or custom deployment pipelines. Fortunately, integrating a few external services can make those deployments drastically easier.
Helm: a package manager for Kubernetes
The first challenge when using Kubernetes is keeping track of your resource definition files. In most cases, each environment will require a distinct configuration for the same resources, which can lead to code duplication or complex processes. On top of that, you may have multiple projects using the same component, and re-using a component without duplication can be hard with Kubernetes. With these constraints in mind, keeping track of those resource files requires real organization.
Helm proposes a solution to this by acting as a package manager. The equivalent of a package in the Helm context is called a Chart, which defines a set of Kubernetes resources to start on the cluster when installed. Charts are stored as tarball archives and made available through a Repository, which can be as simple as an HTTP server but is also compatible with well-known solutions such as Artifactory.
When integrating Helm into your infrastructure, you can use it to package your own services or use Charts designed by the community. Both are made practical through the concept of Values: configuration that can be injected during installation. Packaged as a Chart, resources become re-usable between projects and configurable on a per-environment basis.
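As a minimal sketch, the values.yaml of a hypothetical "my-service" Chart (the name, registry, and keys below are illustrative) could expose the configuration that varies per environment:

```yaml
# values.yaml of a hypothetical "my-service" Chart: these defaults
# can be overridden for each environment at install time
replicaCount: 2
image:
  repository: registry.example.com/my-service
  tag: "1.4.0"
```

A template inside the Chart then references these Values, e.g. replicas: {{ .Values.replicaCount }}, and each environment overrides them at install time with something like helm install my-service ./my-service -f values-prod.yaml, or with --set replicaCount=4 for a one-off change.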
Managing the deployments from afar with ArgoCD
The second organisational complexity comes from the fact that deploying on Kubernetes is manual by default; avoiding this would require a deployment pipeline for each application. On top of that, there is no control over the cluster configuration once delivered: if you modify your configuration directly on the cluster, it will drift out of sync with your code until the next deployment.
ArgoCD is designed to fill this gap by pushing the GitOps concept even further within Kubernetes. The service manages your cluster through a declarative source of truth (e.g., a git repository). It ensures at all times that your cluster is correctly configured by comparing this source of truth to the current cluster state. Any configuration changed manually on the cluster is immediately detected and can be overwritten, either automatically or on demand.
With this, your whole Kubernetes cluster is described in a few files. Newcomers to the project will easily understand the architecture, and will most likely be able to deploy without fearing the unknown edge cases of a custom pipeline.
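Concretely, the source of truth is declared through ArgoCD's Application resource. A hedged sketch (the repository URL, path, and names are hypothetical) showing a git directory kept in sync, with drift automatically reverted:

```yaml
# A hypothetical ArgoCD Application: the git repository is the source of truth
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/config.git
    targetRevision: main          # branch, tag, or commit to track
    path: apps/my-service         # directory containing the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service
  syncPolicy:
    automated:
      prune: true      # delete resources removed from git
      selfHeal: true   # revert manual changes made on the cluster
```

The selfHeal flag is what enforces the "overwritten automatically" behaviour described above; without it, drift is only reported and can be synchronized on demand.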
Integration example using both Helm & ArgoCD
The two previous services work very well together and provide a reproducible, self-maintained way of managing your cluster with very few custom deployment pipelines. The following diagram proposes a possible integration between them.
In this integration, the service is built and packaged as a Helm Chart by the CI/CD pipelines, and the Chart is uploaded to a Helm Repository. These Charts can then be referenced in the configuration repository, which ArgoCD uses as the source of truth to deploy them on the cluster.
With this architecture, deploying an application to Kubernetes comes down to committing the application sources to one git repository, then incrementing a version number in another. Everything else is taken care of automatically.
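The "version number" being incremented is the Chart version referenced by ArgoCD. A possible sketch, assuming a hypothetical Helm Repository at charts.example.com and a Chart named my-service:

```yaml
# Hypothetical Application sourcing a Chart from a Helm Repository:
# committing a bump of targetRevision is what triggers a deployment
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.example.com   # the Helm Repository
    chart: my-service
    targetRevision: 1.4.0                 # bump this to deploy a new release
    helm:
      values: |
        replicaCount: 3                   # environment-specific Values override
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service
  syncPolicy:
    automated: {}
```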
Improve self-maintainability through plugins
We saw how to build around Kubernetes to fully automate deployments. But there are also plugins that can enhance your experience on Kubernetes with improved stability, security, and automation.
Manage all kinds of applications using operators
Kubernetes is optimized to handle stateless applications, as every instance of such an application is independent. You can deploy stateful applications using a resource of kind StatefulSet, but the behaviour may be inconsistent if the application requires specific maintenance operations. For instance, adding a node to a database may not be as trivial as adding one to a stateless application.
This issue can be solved with the Operator pattern. This pattern consists of adding a controller to the cluster that monitors the state of specific resources and reacts to it using the Kubernetes control plane. The controller can thus perform the required maintenance operations at each step of the lifecycle of a specific technology.
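In practice, an operator ships a custom resource definition, and you describe the desired state in a short manifest. A hypothetical example (the API group, kind, and fields below are invented for illustration; real database operators expose similar but differently named resources):

```yaml
# Hypothetical custom resource watched by a database operator's controller.
# Changing spec.instances makes the controller perform the technology-specific
# steps (provisioning, replication setup, failover) that a bare StatefulSet cannot.
apiVersion: databases.example.com/v1
kind: PostgresCluster
metadata:
  name: orders-db
spec:
  instances: 3            # scaling this up safely adds a replica
  storage: 20Gi
  backup:
    schedule: "0 2 * * *" # the operator also automates maintenance tasks
```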
You can find a more in-depth presentation of the Operator pattern in the blog Kubernetes Operators Explained, by Piotr Perzyna.
Enhance your cluster security with Istio
Once your services are deployed, you may want optimized and secure communication between them. A Service Mesh can help you achieve this by actively collecting data from each service and using it to make routing decisions.
Such a concept can be complex and cumbersome to implement. Fortunately, there is an operator for this on Kubernetes: Istio. It can easily be installed on your cluster and handles most of it almost automatically.
Once installed, Istio injects a sidecar container into each marked pod. This container intercepts all requests made to the original container, then uses this data to dynamically change behaviour, for instance with smarter load balancing. Because the feature relies on a proxy container, it is available without any change to the code and without technology restrictions.
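Marking pods for injection is done with a namespace label, and routing behaviour is then declared with Istio resources. A sketch, assuming a hypothetical my-service application with stable and canary subsets already defined:

```yaml
# Labelling a namespace so Istio injects its sidecar proxy into every new pod
apiVersion: v1
kind: Namespace
metadata:
  name: my-service
  labels:
    istio-injection: enabled
---
# Example VirtualService shifting 10% of traffic to a canary, with no code change
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-service
  namespace: my-service
spec:
  hosts:
    - my-service
  http:
    - route:
        - destination:
            host: my-service
            subset: stable
          weight: 90
        - destination:
            host: my-service
            subset: canary
          weight: 10
```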
Kubernetes' core purpose is to self-manage and self-heal deployed applications. While it does that very well, there is still a lot of automation that can be added. In this article, we saw a few optimizations using third-party tools with Kubernetes:
Reproducible build using Helm
Automated deployment using ArgoCD
Enhanced stateful application management, and more, using Operators
Secure and optimized cross-service communication using Istio
Bear in mind that the Kubernetes ecosystem is huge, and there are many additional tools that you can use to improve your daily use of Kubernetes. Digging into those tools will take some time, but if you find the right feature covered by an operator or a third-party service, it may save you much more.
Watch Kubernetes: The Documentary!