As cloud-native development continues to evolve, new tools and technologies emerge to meet the ever-changing demands of modern applications. One tool that’s been getting some attention is Krustlet, a kubelet implementation written in Rust. With its unique focus on Wasm workloads, Krustlet offers an intriguing proposition for Rust projects in the Kubernetes ecosystem.
In this article, we’ll demonstrate how to set up Krustlet and configure it to work with a Kubernetes cluster to run WebAssembly workloads.
Before getting started with Krustlet, make sure you have the following prerequisites in place: a local Kubernetes cluster (we'll use minikube) with kubectl configured, the Rust toolchain with rustup and Cargo, the Docker CLI, and a Docker Hub account for publishing your Wasm artifact.
Krustlet, which stands for Kubernetes Rust kubelet, is an open source project that provides a Kubernetes kubelet implemented in Rust. A kubelet is a node agent that works in the background to ensure containers are running as expected.
Krustlet functions as a node in the Kubernetes cluster, similar to how most kubelets operate. It communicates with the Kubernetes API server, receives scheduling instructions, and runs the assigned workloads. Krustlet's differentiating feature is that it can handle Wasm workloads, making it ideal for edge computing scenarios.
Krustlet is designed to listen on the Kubernetes API’s event stream for new pod requests that match a specific set of node selectors. It schedules those workloads to run using a WASI-based runtime instead of a container-based runtime. This allows workloads to be written in any language that compiles to Wasm and run within the Kubernetes ecosystem, essentially treating Wasm as a first-class citizen.
Wasm is a binary instruction format designed as a portable target for compiling high-level languages such as Rust, C, and C++. It enables applications to run in a web browser at near-native speed, boosting performance over traditional JavaScript execution. However, the scope of WebAssembly extends beyond the browser.
WASI, the system interface for WebAssembly, aims to provide a standardized set of APIs for WebAssembly modules, enabling them to run in various environments, including Kubernetes through Krustlet. In the context of Krustlet, Wasm enables the execution of workloads in a Kubernetes cluster without requiring a full container runtime like Docker. This makes deployments lightweight and fast, enabling developers to take advantage of Rust’s robust type system and memory safety features.
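For instance, if you already have a module compiled for the wasm32-wasi target, a standalone WASI runtime can execute it directly on your machine, with no browser or container involved. The command below is just a quick sketch that assumes you have the wasmtime CLI installed and a hypothetical module named hello.wasm; wasmtime isn't required for the rest of this tutorial:
$ wasmtime run hello.wasm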
To set up Krustlet, you’ll need a Kubernetes cluster and the Krustlet binaries. You can download the Krustlet binaries from the Krustlet GitHub repository or build them yourself:
$ curl -O https://krustlet.blob.core.windows.net/releases/krustlet-v1.0.0-alpha.1-macos-amd64.tar.gz
Once you've downloaded the right binary for your OS, you can proceed with the installation by unpacking the downloaded file with the following tar command in your terminal:
$ tar -xzf krustlet-v1.0.0-alpha.1-macos-amd64.tar.gz
After unpacking, you'll find the Krustlet provider in the directory. To make it easily accessible, move it to a location within your system's $PATH. For example, on Unix-like systems, you can use the following command to move the provider to the /usr/local/bin/ folder:
$ sudo mv krustlet-wasi /usr/local/bin/
By moving the Krustlet provider to a location in your $PATH, you'll ensure that it can be executed from anywhere within your terminal.
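As a quick sanity check, you can confirm that your shell resolves the binary. This step is optional and assumes the release you downloaded exposes the standard --version flag:
$ which krustlet-wasi
$ krustlet-wasi --version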
After setting up Krustlet, you'll need to configure it to work with your Kubernetes cluster to run WebAssembly workloads. But before we get into that, let's build a simple Rust project and publish it to Docker Hub using the wasm-to-oci tool.
To get started, create a new Rust project using the cargo new krustlet_demo command and open it with your chosen text editor. Next, open the ./src/main.rs file and add the following code simulating the work being done:
use std::thread;
use std::time::Duration;

fn main() {
    loop {
        // Perform your recurring task here
        println!("Performing recurring task...");

        // Wait for a specific duration before performing the task again
        thread::sleep(Duration::from_secs(5));
    }
}
The above snippet contains a loop that performs a recurring task every 5 seconds. The println! statement represents the recurring task, but you can replace it with any custom logic or operations that need to be repeated regularly.
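For example, a hypothetical variation of the task might track how many times it has run, which makes the workload's logs a little easier to follow later on. This sketch isn't required for the rest of the tutorial:
use std::thread;
use std::time::Duration;

fn main() {
    // Count iterations so the logs show progress over time
    let mut iteration: u64 = 0;
    loop {
        iteration += 1;
        println!("Performing recurring task (iteration {})...", iteration);

        // Wait five seconds before running the task again
        thread::sleep(Duration::from_secs(5));
    }
}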
Before you compile the project, add the WASI target to your Rust toolchain:
$ rustup target add wasm32-wasi
This ensures that you’ll have the tools and dependencies to compile your Rust code for the WASI platform.
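You can confirm that the target was added by listing the installed targets; wasm32-wasi should appear in the output:
$ rustup target list --installed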
Now that you have the WASI target available, you can build your Rust project to generate the WebAssembly module. Execute the following command in the root directory of your Rust project:
$ cargo build --release --target wasm32-wasi
Once completed, Cargo will generate the WebAssembly module, with a .wasm extension, in the target/wasm32-wasi/release/ directory.
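You can confirm that the module was produced by listing the release directory; for the krustlet_demo project, the file should be named krustlet_demo.wasm:
$ ls target/wasm32-wasi/release/krustlet_demo.wasm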
Next, let's explore the process of publishing the WebAssembly module to Docker Hub, enabling easy access and deployment for others interested in using your Wasm-powered Rust project. Follow the steps below to publish your Wasm artifact to Docker Hub.
The wasm-to-oci tool converts Wasm modules to the OCI format compatible with Docker. Visit the releases page, download the pre-built binary for your OS, and add it to your $PATH using the following commands:
$ curl -LO https://github.com/engineerd/wasm-to-oci/releases/download/v0.1.2/darwin-amd64-wasm-to-oci
$ mv darwin-amd64-wasm-to-oci wasm-to-oci
$ chmod +x wasm-to-oci
$ sudo cp wasm-to-oci /usr/local/bin
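Before moving on, you can confirm the binary is reachable from your shell. This assumes the tool supports the conventional --help flag:
$ wasm-to-oci --help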
To publish your Wasm artifact to a container registry, such as Docker Hub, you'll need to log in to the registry using the Docker CLI or other available tools provided by the specific container registry. The wasm-to-oci tool will use the credentials stored in your ~/.docker/config.json file:
$ docker login
By logging in to your container registry, you'll ensure that the wasm-to-oci tool can access your registry and push the Wasm artifact without any authentication issues.
The wasm-to-oci push command converts the Wasm module to the OCI format and pushes it to Docker Hub using the specified repository and tag in this format: <dockerhub-username>/<repository-name>:<tag>. Run the following command:
$ wasm-to-oci push ./target/wasm32-wasi/release/krustlet_demo.wasm docker.io/ikehakinyemi/krustlet-demo:latest
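If you'd like to verify the push from the command line, wasm-to-oci can also pull the module back out of the registry. This is a sketch that assumes the pull subcommand and --out flag behave as described in the wasm-to-oci README:
$ wasm-to-oci pull docker.io/ikehakinyemi/krustlet-demo:latest --out pulled.wasm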
Next, visit your Docker Hub repository’s page and confirm that your Wasm artifact has been successfully pushed and is visible in the repository. Now, let’s see how to use minikube to deploy Krustlet.
First, let’s bootstrap Krustlet using some existing configurations from the Krustlet team. Start a local Kubernetes cluster using minikube with the following command:
$ minikube start
Next, verify that the cluster is actively running:
$ kubectl get nodes -o wide
To join the cluster with the appropriate permissions, Krustlet requires a valid kubeconfig and bootstrap token. For this setup, you'll use the bootstrap script provided by the Krustlet team to generate the kubeconfig file and the necessary token:
$ curl https://raw.githubusercontent.com/krustlet/krustlet/main/scripts/bootstrap.sh | /bin/bash
This creates a config file at ${HOME}/.krustlet/config/bootstrap.conf. Once the script has finished running, you can start Krustlet's WASI provider with the following command:
$ KUBECONFIG=~/.krustlet/config/kubeconfig \
    krustlet-wasi \
    --node-ip <GATEWAY> \
    --node-name=krustlet \
    --bootstrap-file=${HOME}/.krustlet/config/bootstrap.conf
Use the minikube ip command to check the default gateway once minikube is running, and replace the <GATEWAY> placeholder with that value.
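For example, you can print the address first and then substitute it into the command above:
$ minikube ip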
After starting Krustlet, you may encounter a prompt to manually approve TLS certificates. This is because the serving certificates used by Krustlet require manual approval. To proceed, open a new terminal and execute the following command:
$ kubectl certificate approve <hostname>-tls
The hostname will be displayed in the prompt generated by the Krustlet server. Keep the Krustlet server running despite any logged errors; Krustlet is still in alpha and may have some rough edges that can be overlooked for now. This step is only necessary the first time you start Krustlet.
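Once the certificate has been approved and Krustlet finishes registering, you can re-run the node listing from earlier; you should see a node named krustlet alongside your minikube node:
$ kubectl get nodes -o wide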
Now, let's verify Krustlet's functionality by writing and applying the following Wasm workload manifest. Create a workload.yaml file and add the following content:
apiVersion: v1
kind: Pod
metadata:
  name: krustlet-demo
spec:
  containers:
    - name: krustlet-demo
      image: docker.io/ikehakinyemi/krustlet-demo:latest
  tolerations:
    - key: "kubernetes.io/arch"
      operator: "Equal"
      value: "wasm32-wasi"
      effect: "NoExecute"
    - key: "kubernetes.io/arch"
      operator: "Equal"
      value: "wasm32-wasi"
      effect: "NoSchedule"
Here, we've specified tolerations that match the taints on the Krustlet node, so the Wasm workload can be scheduled there rather than on the regular container nodes. Also, replace the image value to point to the repository where you published your artifact. Now, apply the manifest using the following kubectl command:
$ kubectl apply -f workload.yaml
Next, confirm the status of the pod:
$ kubectl get pods
NAME            READY   STATUS    RESTARTS   AGE
krustlet-demo   1/1     Running   0          24s
Now you can inspect the logs. You should observe that Krustlet starts generating logs in its terminal window, providing updates on the scheduled workload:
$ kubectl logs krustlet-demo
Performing recurring task...
Performing recurring task...
Performing recurring task...
With the above logs, you can confirm that your workload is running as expected. As part of the local development process, you can clean up the resources you used by destroying the cluster with the minikube delete command.
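For example, you might delete the workload first and then tear down the cluster:
$ kubectl delete -f workload.yaml
$ minikube delete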
Krustlet provides a compelling way for Rust developers to run WebAssembly workloads in a Kubernetes environment. It extends the versatility of Kubernetes, allowing it to handle more than just container-based applications.
For developers invested in Rust and WebAssembly, Krustlet provides an intuitive and efficient way to integrate these technologies into the Kubernetes ecosystem. For more information and to explore further, please refer to the official Krustlet documentation and the Rust and WebAssembly documentation.