KubeVirt in Action
KubeVirt is an open-source project that offers a VM-based virtualization option on top of any Kubernetes cluster. A lot of work has happened in this area over the last couple of years, and KubeVirt is reaching the maturity needed to become production-ready. In this article, we will see how to deploy it on OpenShift Container Platform (OCP) 4.2 and get your first Windows VM up and running inside OCP.
Prerequisites
- An up-and-running OCP 4.2 cluster.
- KubeVirt v0.24.0.
- A Windows 2012 R2 ISO download.
- The Containerized Data Importer (CDI) operator, for ISO image upload.
- virtctl, for operating on the VM.
To run a Windows-based VM from a Windows ISO file, the following eight steps are required:
1. Configure CDI.
2. Deploy KubeVirt.
3. Upload the ISO image using virtctl.
4. Create a PV for the hard disk that will hold the Windows installation.
5. Create a Windows VM using a sample YAML file.
6. Start the VM using virtctl.
7. Connect to the VM using VNC.
8. Install Windows.
1. Configure CDI
CDI is required to upload an image to a PVC. CDI can be configured by running the following commands:
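The original commands did not survive in this copy of the article; a typical CDI setup looks like the following sketch. The CDI version shown is an assumption; check the containerized-data-importer releases page for the version matching your cluster:

```shell
# Assumed CDI version; pick the latest release for your environment
export CDI_VERSION=v1.11.0

# Deploy the CDI operator, then its custom resource
oc apply -f https://github.com/kubevirt/containerized-data-importer/releases/download/${CDI_VERSION}/cdi-operator.yaml
oc apply -f https://github.com/kubevirt/containerized-data-importer/releases/download/${CDI_VERSION}/cdi-cr.yaml

# Wait until all CDI pods (apiserver, deployment, uploadproxy) are Running
oc get pods -n cdi
```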
2. Deploy KubeVirt
Choose an appropriate version. v0.24.0 is given as an example, as it was the latest version at the time of writing; change this to the latest release when you try it:
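The deployment commands were stripped from this copy; a minimal sketch of the standard KubeVirt operator install, assuming the v0.24.0 release named above, looks like this:

```shell
export KUBEVIRT_VERSION=v0.24.0

# Deploy the KubeVirt operator, then the KubeVirt custom resource
oc apply -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml
oc apply -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml

# Wait until virt-operator, virt-api, virt-controller, and virt-handler are Running
oc get pods -n kubevirt
```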
Apply the KubeVirt SCC (security context constraints) for OpenShift:
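The exact SCC command is missing from this copy; a common sketch for older KubeVirt releases on OpenShift grants the privileged SCC to the KubeVirt operator service account (adjust to your cluster's security policy):

```shell
# Sketch: grant the privileged SCC to the kubevirt-operator service account
# in the kubevirt namespace (service account name is an assumption)
oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-operator
```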
If you are using rook-ceph, then apply the following file as well:
3. Image Upload Using Virtctl
Install virtctl using the following commands. Once again, use the latest version:
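The install commands were not preserved here; a typical sketch downloads the virtctl binary matching your KubeVirt release (version and paths are assumptions):

```shell
export KUBEVIRT_VERSION=v0.24.0

# Download the virtctl client binary for Linux x86_64 and make it executable
wget -O virtctl https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/virtctl-${KUBEVIRT_VERSION}-linux-amd64
chmod +x virtctl
```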
Now, upload this image using the following command. Give the upload proxy IP that you get from the cdi-uploadproxy service (oc -n cdi get svc).
Remember to run this command from the Master node.
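The upload command itself is missing from this copy; a sketch based on the PVC name and size described below (the ISO path and proxy address are placeholders you must fill in):

```shell
# Upload the Windows ISO into a new 25Gi PVC named win2k12-pvc.
# Replace <upload-proxy-ip> with the cdi-uploadproxy service IP
# and the --image-path with the actual location of your ISO.
./virtctl image-upload \
  --pvc-name=win2k12-pvc \
  --pvc-size=25Gi \
  --image-path=/tmp/win2k12r2.iso \
  --uploadproxy-url=https://<upload-proxy-ip>:443 \
  --insecure
```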
This command will create two PVCs, win2k12-pvc and win2k12-pvc-scratch, of the same size (25Gi) as specified in the upload command. The scratch PVC is temporary and will be deleted automatically after a successful image upload.
4. Create PV for the Hard Disk That Will Hold the Windows Installation
Put the following code in a .yaml file and apply it using oc apply -f <filename>. Remember to update storageClassName to an appropriate value, based on the output of oc get storageclass. Use rook-filesystem if rook-ceph is in place.
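The manifest was stripped from this copy of the article. In practice, a claim against a dynamic provisioner is what gets created here; a minimal sketch follows, where the claim name, size, and storage class are assumptions you should adapt:

```yaml
# Sketch of a claim for the Windows hard disk.
# The name win2k12-hd-pvc is an assumption; the storageClassName must
# match one from `oc get storageclass` (e.g. rook-filesystem for rook-ceph).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: win2k12-hd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 25Gi
  storageClassName: rook-filesystem
```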
5. Create a Windows VM Using Sample YAML File
Now, it is time to create a VM using the ISO image that was uploaded earlier. Before this step, you need to attach a virtio driver as a cdrom. You can do this using podman/docker with the following command:
podman pull kubevirt/virtio-container-disk
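The VM manifest itself did not survive in this copy; a minimal sketch of a KubeVirt VirtualMachine for this setup follows. It boots from the hard-disk PVC, attaches the uploaded ISO and the virtio driver image as CD-ROMs, and uses the VM name samplevm referenced later in the article. The PVC names, CPU, and memory values are assumptions:

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: samplevm
spec:
  running: false            # created shut down; started later with virtctl
  template:
    metadata:
      labels:
        kubevirt.io/domain: samplevm
    spec:
      domain:
        cpu:
          cores: 2          # assumption; size to your workload
        resources:
          requests:
            memory: 4Gi     # assumption
        devices:
          disks:
            - name: harddrive          # installation target
              bootOrder: 1
              disk:
                bus: virtio
            - name: windows-iso        # the uploaded Windows ISO
              bootOrder: 2
              cdrom:
                bus: sata
            - name: virtio-drivers     # virtio drivers for the installer
              cdrom:
                bus: sata
      volumes:
        - name: harddrive
          persistentVolumeClaim:
            claimName: win2k12-hd-pvc  # PVC created in step 4 (name assumed)
        - name: windows-iso
          persistentVolumeClaim:
            claimName: win2k12-pvc     # PVC created by the image upload
        - name: virtio-drivers
          containerDisk:
            image: kubevirt/virtio-container-disk
```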
Save the above code in a YAML file and then create the VM with the following command:
oc create -f <filename>.yaml
6. Start VM Using virtctl
By default, the VM is in shutdown mode, so the first thing you need to do is start it.
Starting the VM will create a VMI (VirtualMachineInstance), and you will see the VMI running in a few minutes.
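The start command is missing from this copy; the sketch below uses the samplevm name seen later in the article:

```shell
# Start the VM; KubeVirt then spawns a VMI (VirtualMachineInstance)
./virtctl start samplevm

# Watch for the VMI to reach the Running phase
oc get vmi
```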
7. Connect to VM Using VNC
Now, it is time to connect to VM using VNC. This command needs to be executed from a host that is capable of showing a display. You can use MobaXTerm or any other such software.
[root@localhost tmp]# ./virtctl vnc samplevm
If it fails with an error like "remote_viewer not present", then install remote-viewer using the following command. Otherwise, you will see the Windows installation screen:
[root@localhost tmp]# yum install virt-viewer
8. Install Windows
Complete the Windows installation by following this video. It covers the steps required for using virtio drivers.
Remember, your mouse pointer won't work on most screens, so you need to use keys like Tab (to toggle between options), Spacebar (for checkbox selection), Enter (for selection), the right arrow key (for expansion), etc.
Congratulations! Your Windows VM is up and running now.
Running VMs in the same Kubernetes-based container environment provides several benefits: a single pane of glass for both VMs and containers, an intermediate platform for eventual containerization, and a platform for hosting VM-based apps that can't be converted into microservices. OpenShift's Container-native Virtualization (CNV) feature is in tech preview; it automates this entire process so that you can run it from the OpenShift console. After this feature goes GA, hosting VM-based workloads on OpenShift will be much easier.
Putting a VM inside a pod results in nested virtualization, so it carries some performance overhead. Several features are still a work in progress; for example, you can't increase CPU/memory on the fly. But pretty soon, all such limitations and challenges will be a thing of the past.
Disclaimer: These are my personal views, not my employer's views on this subject.