Deploying and Validating Kata Containers
This blog provides a step-by-step guide to installing and validating Kata Containers on IBM Cloud, covering essential configurations and hands-on deployment.
In Part 1, we explored how Kata Containers bring together the agility of Kubernetes pods and the robust security of virtual machines, offering a powerful solution for cloud-native workloads.
In this part, we’ll provide a hands-on guide to deploying and validating Kata Containers in IBM Cloud, a platform well-suited for secure, scalable, and enterprise-grade cloud solutions. From configuring the runtime to testing its performance and security, we’ll walk you through the entire process. By the end of this guide, you’ll have the practical knowledge to integrate Kata Containers into your IBM Cloud environment and enhance the security of your containerized applications.
Let’s turn theory into practice and unlock the potential of Kata Containers in IBM Cloud!
Prerequisites
Before starting, ensure you have the following:
- An IBM Cloud Kubernetes Service (IKS) cluster running in VPC
- Kubernetes CLI (kubectl) configured for your cluster
- Basic familiarity with Kubernetes YAML configurations
For this exercise, we use a simple cluster configuration:
- Number of nodes: 3
- Node flavor: bx2.4x16
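Before starting the installation, it helps to confirm that kubectl is pointed at the intended cluster and that all worker nodes are Ready. A quick check (the cluster name is a placeholder for your own):
ibmcloud ks cluster config --cluster <your-cluster-name>   # download the kubeconfig for the IKS cluster
kubectl get nodes -o wide                                  # all three nodes should report Ready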
Step 1: Install Kata Containers on the Cluster
To deploy Kata Containers, you’ll use the Kata Deploy daemonset, which automates the installation process. Run the following command to deploy it:
kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy.yaml
This daemonset will install Kata runtime configurations across all worker nodes in your cluster.
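Before moving on, you can confirm that the daemonset has rolled out and that the worker nodes have been labeled for Kata. These checks assume the defaults from the upstream manifest, namely the kube-system namespace and the katacontainers.io/kata-runtime=true node label:
kubectl -n kube-system rollout status daemonset/kata-deploy   # wait for the installer pods to finish
kubectl get nodes -l katacontainers.io/kata-runtime=true      # nodes ready to run Kata workloads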
Step 2: Create the KataConfig Custom Resource
Define a KataConfig Custom Resource (CR) to manage the deployment of Kata runtimes on the cluster.
Use the following YAML file:
kind: KataConfig
metadata:
  name: kataconfig
spec:
  runtimeClass: kata-qemu
Apply this configuration with:
kubectl apply -f kataconfig.yaml
This CR automates the setup of the Kata runtime on all worker nodes.
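Once the CR is reconciled, a RuntimeClass for the QEMU-backed runtime should be registered with the cluster. Assuming it is published under the name kata-qemu, as referenced in the CR above, you can confirm with:
kubectl get runtimeclass kata-qemu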
Step 3: Deploy a Sample Workload
To test the integration of Kata Containers, deploy a sample workload. Use the following example deployment YAML:
kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-qemu.yaml
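The example manifest deploys a PHP/Apache workload that requests the Kata runtime. If you prefer to write your own manifest, the key field is spec.runtimeClassName; here is a minimal sketch (the pod name and image are illustrative, not part of the example above):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kata              # illustrative name
spec:
  runtimeClassName: kata-qemu   # run this pod inside a Kata VM
  containers:
  - name: nginx
    image: nginx                # any container image works here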
Step 4: Validate the Deployment
Check the status of the running pods to ensure the workload is deployed successfully:
kubectl get pods
Example output:
NAME READY STATUS RESTARTS AGE
php-apache-kata-qemu-78ccf8958f-nbn6d 1/1 Running 0 46h
Inspect the pod details to confirm it is using the kata-qemu runtime:
kubectl describe pod php-apache-kata-qemu-78ccf8958f-nbn6d
Key details to verify (you can also query these directly, as shown below):
- Runtime class name: kata-qemu
- Node selector: katacontainers.io/kata-runtime=true
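For a quicker check, the following commands print just these two fields (the pod name is the one from the example output above):
kubectl get pod php-apache-kata-qemu-78ccf8958f-nbn6d -o jsonpath='{.spec.runtimeClassName}'
kubectl get pod php-apache-kata-qemu-78ccf8958f-nbn6d -o jsonpath='{.spec.nodeSelector}'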
Step 5: Verify the Underlying VM
To ensure the container is running inside a lightweight VM, log into one of your cluster nodes and check the running processes:
ps -eaf | grep "kvm"
You should see a QEMU process corresponding to the Kata Containers VM, similar to the following output:
root 49189 49179 0 Dec03 ? 00:02:57 /var/opt/kata/bin/qemu-system-x86_64 \
-name sandbox-6afa9c35ac34dd2e7364ddae072358233b57efa390b5c9f2b6b30986be48e686 \
-uuid 7e89b75a-6dab-4d21-80da-f7d86ee3d4f7 \
-machine q35,accel=kvm,nvdimm=on \
-cpu host,pmu=off \
-qmp unix:fd=3,server=on,wait=off \
-m 2048M,slots=10,maxmem=17040M \
-device pci-bridge,bus=pcie.0,id=pci-bridge-0,chassis_nr=1,shpc=off,addr=2,io-reserve=4k,mem-reserve=1m,pref64-reserve=1m \
-device virtio-serial-pci,disable-modern=true,id=serial0 \
-device virtconsole,chardev=charconsole0,id=console0 \
-chardev socket,id=charconsole0,path=/run/vc/vm/6afa9c35ac34dd2e7364ddae072358233b57efa390b5c9f2b6b30986be48e686/console.sock,server=on,wait=off \
-device nvdimm,id=nv0,memdev=mem0,unarmed=on \
-object memory-backend-file,id=mem0,mem-path=/var/opt/kata/share/kata-containers/kata-ubuntu-latest.image,size=268435456,readonly=on \
-device virtio-scsi-pci,id=scsi0,disable-modern=true \
-object rng-random,id=rng0,filename=/dev/urandom \
-device virtio-rng-pci,rng=rng0 \
-device vhost-vsock-pci,disable-modern=true,vhostfd=4,id=vsock-2212957142,guest-cid=2212957142 \
-chardev socket,id=char-c824ec5354a6e25f,path=/run/vc/vm/6afa9c35ac34dd2e7364ddae072358233b57efa390b5c9f2b6b30986be48e686/vhost-fs.sock \
-device vhost-user-fs-pci,chardev=char-c824ec5354a6e25f,tag=kataShared,queue-size=1024 \
-netdev tap,id=network-0,vhost=on,vhostfds=5,fds=6 \
-device driver=virtio-net-pci,netdev=network-0,mac=d6:4a:fd:51:f7:06,disable-modern=true,mq=on,vectors=4 \
-rtc base=utc,driftfix=slew,clock=host \
-global kvm-pit.lost_tick_policy=discard \
-vga none \
-no-user-config \
-nodefaults \
-nographic \
--no-reboot \
-object memory-backend-file,id=dimm1,size=2048M,mem-path=/dev/shm,share=on \
-numa node,memdev=dimm1 \
-kernel /var/opt/kata/share/kata-containers/vmlinux-6.1.62-140 \
-append tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k cryptomgr.notests net.ifnames=0 pci=lastbus=0 root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro ro rootfstype=ext4 console=hvc0 console=hvc1 quiet systemd.show_status=false panic=1 nr_cpus=4 selinux=0 systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket scsi_mod.scan=none \
-pidfile /run/vc/vm/6afa9c35ac34dd2e7364ddae072358233b57efa390b5c9f2b6b30986be48e686/pid \
-smp 1,cores=1,threads=1,sockets=4,maxcpus=4
root 49197 2 0 Dec03 ? 00:00:00 [kvm-nx-lpage-re]
root 49201 2 0 Dec03 ? 00:00:00 [kvm-pit/49189]
root 116536 116078 0 04:22 ? 00:00:00 grep --color=auto kvm
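As an additional sanity check, you can compare the number of QEMU processes on a node with the number of Kata pods scheduled on it; each Kata pod runs inside its own lightweight VM. A rough count, assuming standard Linux tooling on the worker node:
pgrep -fc qemu-system   # number of Kata VMs currently running on this node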
Conclusion
In this article, we explored the setup and deployment of Kata Containers on an IBM Cloud Kubernetes Service (IKS) cluster within a VPC. With its strong isolation and security capabilities, Kata Containers offer a reliable runtime solution for workloads that demand enhanced security or multi-tenancy.