Kubernetes is designed for automation. It comes with lots of built-in features that help with deploying and running workloads, which can be customized with the help of controllers. Node operators are clients of the Kubernetes API that act as controllers for a custom resource.
This tutorial breaks down the concept of Kubernetes node operators. It reviews what they are, why and when they are needed, and the advantages of using them. It also covers best practices for building operators and, finally, provides a step-by-step guide to creating a node operator.
Before we proceed any further, however, let’s quickly explore some important Kubernetes components that we might come across as we go through this article. My hope is that at the end of the day, this would be a one-stop guide for building a Kubernetes node operator.
While this tutorial is not intended for Kubernetes beginners, it assumes at least a basic working knowledge of Kubernetes, kubectl, Go, and Docker.
For test purposes, we can use Minikube, a tool that makes it easy to run Kubernetes locally. See the Minikube documentation for steps on installing and running Minikube, the Kubernetes documentation for installing kubectl, the Go website for downloading Go, and the Docker documentation for installing Docker.
Node operators are applications that take advantage of Kubernetes' extensibility to deliver the automation benefits of cloud services. They can package, deploy, and manage applications from start to finish. These applications can not only be deployed on Kubernetes itself, but can also run on any managed Kubernetes service, e.g., EKS, GKE, etc.
In essence, node operators provide application-specific automation with Kubernetes. In its simplest form, an operator adds an endpoint to the Kubernetes API server, called a custom resource (CR).
This comes with a control plane component that monitors and maintains the custom resources as they are created. These operators can then act based on the state of the resources.
Every operator is built upon two core Kubernetes principles: custom resources and custom controllers.
CRs are an extension of the Kubernetes API that are built for individual use. They are not always available in a default Kubernetes installation, unlike other built-in resources. Per the docs:
“They represent a customization of a particular Kubernetes installation … making Kubernetes more modular.”
CRs are dynamic and can be updated independently of the cluster itself. Once the CR is installed, users can create and access its objects using kubectl, just as we can do for built-in resources like pods, deployments, and so on.
Note: CRs are defined using the CustomResourceDefinition API.
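To make that concrete, here is a minimal sketch of a CustomResourceDefinition for the App kind we build later in this tutorial. The group and names match the sample project, but treat this as an illustration rather than the exact manifest the SDK generates:

# Minimal CRD sketch; illustrative, not the exact SDK-generated file
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # The name must follow the <plural>.<group> convention
  name: apps.sample-operator.example.com
spec:
  group: sample-operator.example.com
  names:
    kind: App
    listKind: AppList
    plural: apps
    singular: app
  scope: Namespaced
  version: v1alpha1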
When we combine a custom resource with a custom controller, it provides a true declarative API. This allows us to declare or specify the desired state of a resource and keep the current state of Kubernetes objects in sync with the desired state.
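For example, with an App CRD like the sketch above installed, declaring the desired state is as simple as creating an App object; the custom controller then works to make the cluster match it. (The replicas field anticipates the spec we define later in this tutorial.)

apiVersion: sample-operator.example.com/v1alpha1
kind: App
metadata:
  name: example-app
spec:
  # Desired state: the controller should keep three pods running
  replicas: 3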
Operators can perform automation tasks on behalf of the infrastructure engineer/developer. As a result, there are a number of scenarios in which a node operator can be used.
For example, node operators come in handy when defining custom applications like Spark, Cassandra, Airflow, ZooKeeper, etc. These might otherwise need a lot of microservices to manage their lifecycle; instead, we can deploy instances of these applications with operators, making them easier to manage.
They’re also useful for stateful applications such as databases. Some of these stateful applications have pre-provisioning and post-provisioning steps that can easily lead to errors, which can be curtailed by automating with operators.
If there isn't an operator in the ecosystem that implements the desired behavior for an application, we can code our own through a wide variety of methods. However, this section will focus on the Operator SDK.
The Operator SDK was originally written by CoreOS and is now maintained by Red Hat. It is one of the easiest and most straightforward ways to build an operator without deep knowledge of the Kubernetes API's complexities.
Other methods include client-go, the official Go client for the Kubernetes API. However, using this client to build an operator requires a working knowledge of the Go programming language.
Kubebuilder is another option. It is maintained by a Kubernetes Special Interest Group (SIG) and is geared toward building apps that operate within Kubernetes. It is also written in Go and uses controller-runtime, which allows it to communicate with the Kubernetes API.
There are multiple ways of installing the Operator SDK, two of which we'll highlight here. The first is installing the operator binary directly. We can fetch the latest version of the Operator SDK from the Operator Framework by running:
$ wget https://github.com/operator-framework/operator-sdk/releases/download/v0.15.2/operator-sdk-v0.15.2-x86_64-linux-gnu
The next step is to move the downloaded binary into a directory on our PATH by running:
$ sudo mv operator-sdk-v0.15.2-x86_64-linux-gnu /usr/local/bin/operator-sdk
Then, we can proceed to make it executable by running:
$ sudo chmod +x /usr/local/bin/operator-sdk
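To confirm the SDK was installed correctly, we can print its version:

$ operator-sdk version

For the binary downloaded above, this should report v0.15.2.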
An alternative method is cloning the SDK from its GitHub repository and installing it from there. To do so, we can make a directory for the Operator Framework on the Go workspace path ($GOPATH):
$ mkdir -p $GOPATH/src/github.com/operator-framework
We then navigate into that path by running:
$ cd $GOPATH/src/github.com/operator-framework
Now, we can proceed to clone the Operator framework repository into the directory we just created, by running the following set of commands:
$ git clone https://github.com/operator-framework/operator-sdk
$ cd operator-sdk
$ git checkout v0.4.0
$ make dep
$ make install
The operator-sdk new command bootstraps a new operator project. An example is shown below:
$ operator-sdk new sample-operator
$ cd sample-operator
The project structure generated from running the above command looks like this:
├── Gopkg.lock
├── Gopkg.toml
├── build
│   └── Dockerfile
├── cmd
│   └── manager
│       └── main.go
├── deploy
│   ├── operator.yaml
│   ├── role.yaml
│   ├── role_binding.yaml
│   └── service_account.yaml
├── pkg
│   ├── apis
│   │   └── apis.go
│   └── controller
│       └── controller.go
└── version
    └── version.go
Next, we generate some code to represent the project's CR definitions, i.e., the custom resource (API) and the custom controller. To do so, we can run the commands below:
$ operator-sdk add api --api-version=sample-operator.example.com/v1alpha1 --kind=App
$ operator-sdk add controller --api-version=sample-operator.example.com/v1alpha1 --kind=App
These commands specify that the CRD will be called App and create the pkg/apis/app/v1alpha1/app_types.go file for us. This file can be modified to add extra parameters.
Note: We can also run the following command to generate the CRD:
$ operator-sdk generate crds
$ operator-sdk generate k8s
This generates a new set of YAML files and Go code appended to the tree above.
Note that the deploy/crds/sample-operator_v1alpha1_app_crd.yaml file contains the custom resource definition, while the deploy/crds/sample-operator_v1alpha1_app_cr.yaml file contains the custom resource.
Note: We can install the CRD on the Kubernetes cluster by running:
$ kubectl apply -f deploy/crds/sample-operator_v1alpha1_app_crd.yaml
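We can then confirm the definition was registered by listing the CRDs in the cluster:

$ kubectl get crd

The new apps.sample-operator.example.com entry should appear in the output.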
At this point, the operator runs what is known as a “reconcile loop.” This simply calls a reconcile function that is triggered every time a CR object is created from the CRD we defined above.
The pkg/controller/app/app_controller.go controller file contains the controller logic and the reconcile function. It also contains sample code that creates a pod, which we can adjust to fit our needs.
During the reconcile process, the controller fetches the app resource in the current namespace and compares the value of its replica field (i.e., the desired number of pods to run) with the actual number of pods running.
This comparison ensures that the desired number of pods matches the number of active pods. An example of modifying the controller logic is changing the AppSpec Go struct in the pkg/apis/sample-operator/v1alpha1/app_types.go file by adding a field to store the number of replicas:
type AppSpec struct {
	Replicas int32 `json:"replicas"`
}
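For context, the reconcile logic described above takes roughly the following shape in app_controller.go. This is a heavily trimmed sketch, not the exact scaffold output: the module path, the busybox container, and the ReconcileApp fields are stand-ins.

package app

import (
	"context"

	samplev1alpha1 "github.com/<username>/sample-operator/pkg/apis/sampleoperator/v1alpha1"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

// ReconcileApp reconciles App objects (the real scaffold also carries a scheme).
type ReconcileApp struct {
	client client.Client
}

// Reconcile compares the desired replica count in the App spec with the
// pods that actually exist, and creates pods to close the gap.
func (r *ReconcileApp) Reconcile(request reconcile.Request) (reconcile.Result, error) {
	// Fetch the App instance that triggered this reconcile
	app := &samplev1alpha1.App{}
	if err := r.client.Get(context.TODO(), request.NamespacedName, app); err != nil {
		if errors.IsNotFound(err) {
			// The CR was deleted; nothing left to reconcile
			return reconcile.Result{}, nil
		}
		return reconcile.Result{}, err
	}

	// List the pods labeled as belonging to this App
	podList := &corev1.PodList{}
	if err := r.client.List(context.TODO(), podList,
		client.InNamespace(request.Namespace),
		client.MatchingLabels{"app": app.Name},
	); err != nil {
		return reconcile.Result{}, err
	}

	// Desired state vs. actual state: create a pod if we are below target
	if int32(len(podList.Items)) < app.Spec.Replicas {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				GenerateName: app.Name + "-pod",
				Namespace:    app.Namespace,
				Labels:       map[string]string{"app": app.Name},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "busybox", Image: "busybox", Command: []string{"sleep", "3600"}},
				},
			},
		}
		if err := r.client.Create(context.TODO(), pod); err != nil {
			return reconcile.Result{}, err
		}
		// Requeue so the loop re-checks the count after this pod starts
		return reconcile.Result{Requeue: true}, nil
	}

	return reconcile.Result{}, nil
}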
Note: There is no limit to the number of modifications that can be made to this file, as it is highly customizable.
Remember to always run the operator-sdk generate k8s command after making changes to the controller structure, as this updates the generated API package file, pkg/apis/app/v1alpha1/zz_generated.deepcopy.go.
Before deploying the operator, we can test it on our local machine, outside the cluster. To do so, we first run the operator locally against the cluster with the following command:
$ operator-sdk run local
Next, we can test our sample application by running:
$ kubectl apply -f <(echo "
apiVersion: sample-operator.example.com/v1alpha1
kind: App
metadata:
  name: test-app
spec:
  replicas: 3
")
Note: This would spin up three pods, as defined in the controller logic.
$ kubectl get pods -l app=test-app
NAME                READY   STATUS    RESTARTS   AGE
test-app-podc2ckn   1/1     Running   0          103s
test-app-podhg56f   1/1     Running   0          103s
test-app-pod12efd   1/1     Running   0          103s
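We can also exercise the CR with other kubectl verbs, just as we would a built-in resource (app and apps here are the singular and plural names registered by the CRD):

$ kubectl get apps
$ kubectl describe app test-app
$ kubectl edit app test-app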
Once we are convinced the operator works as expected and other kubectl commands (create, describe, edit) can be run against our CR successfully, as shown above, our next step is to deploy the operator to the cluster.
To publish the operator, we need a Docker container image that the Kubernetes cluster can easily access. We can push the image to any container registry; in this tutorial, we are making use of Quay.io.
Next, we build the image and publish it to the registry by running these commands:
$ operator-sdk build quay.io/<username>/sample-operator
$ docker push quay.io/<username>/sample-operator
Now, update the deploy/operator.yaml file to point to the new Docker image on the registry. We do so by running the following command:
$ sed -i 's|REPLACE_IMAGE|quay.io/<username>/sample-operator|g' deploy/operator.yaml
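With the image reference updated, we can apply the manifests that the SDK scaffolded in the deploy/ directory, using the file names generated earlier:

$ kubectl apply -f deploy/service_account.yaml
$ kubectl apply -f deploy/role.yaml
$ kubectl apply -f deploy/role_binding.yaml
$ kubectl apply -f deploy/operator.yaml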
Applying these manifests with kubectl is enough to deploy the operator.

Node operators are meant to simplify the process of extending Kubernetes, and as we have seen, they are quite easy to build and integrate.
Among their numerous benefits, they ease automation, which allows us to easily deploy cloud-native applications (collections of small, independent, loosely coupled services) anywhere and manage them exactly as we want to.
We hope this guide helps you get started quickly with building your own Kubernetes operator. Want to find or share operators? Check out OperatorHub.io for more detailed information.