
Running Nanos Unikernels on ESX


Did you know you can run unikernels on ESXi? If you've worked with unikernels before, you're probably aware of this, but over these past few months we got Nanos unikernels running there as well. We've had Xen support for over a year now, because the older AWS instances utilized it, and we've had KVM support since day one, because that's our main go-to hypervisor.

We already run certain workloads, like Go and Rust webservers, 2x as fast on Google Cloud, and since many companies still have very large on-premise installations, we wanted to add VMware vSphere support.

The set of examples below should work on both vSphere 6.7 and 7.0. We haven't tested other versions yet, but if you do, let us know! It also works in a nested virtualization setup, such as running vSphere under Fusion. That's a great way to test things out without having to take over an entire box; it works well on my ~5-year-old Mac.

First things first: go install OPS. If this is your first encounter with unikernels or OPS, you'll probably want to check out this tutorial first.

Create an Image

The first thing you'll want to do is create an image. The image is exactly what it sounds like: a disk image, except this one isn't running Linux. It runs only your application, so instead of starting a bunch of services and handing control to an init manager, it boots straight into your program.

You'll want to set the GOVC environment variables, which will at least include the username, password, and URL, but could also include the datacenter. Also, at this time we haven't added TLS support for the client yet, but that should be trivial to add for those who are interested; let us know if so. The underlying library has a rather extensive SOAP-based API, so most of these commands are very comparable to their govc CLI counterparts:

Shell

export GOVC_INSECURE=1
export GOVC_URL="login:pass@host:port"

GOOS=linux go build -o gtest

ops image create -c config.json -t vsphere -a gtest
This will convert your image to a monolithic flat VMDK, which is composed of two parts. We then upload it and create a copy of it to use.
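The -c config.json flag above points OPS at a JSON configuration file. Here is a minimal sketch, assuming a hypothetical gtest webserver listening on port 8080 (the port value is made up; adjust it to whatever your program actually uses):

```shell
# Write a minimal OPS config exposing port 8080 for the unikernel.
cat > config.json <<'EOF'
{
  "RunConfig": {
    "Ports": ["8080"]
  }
}
EOF
```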

Create an Instance

After you create a disk image, we'll create an instance, so you'll need to set the resource pool. If you don't know what yours is, you can find it on the main host screen:


Shell

export GOVC_INSECURE=1
export GOVC_URL="login:pass@host:port"
export GOVC_RESOURCE_POOL="/ha-datacenter/host/localhost.localdomain/Resources"

ops instance create -t vsphere -i gtest
Also, when passing the login and password through the GOVC_URL environment variable, ensure that they are URL-encoded if they contain special, non-URL-safe characters.
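Since GOVC_URL is parsed as a URL, one way to handle a password with special characters is to percent-encode it first. A quick sketch using python3 for the encoding step (the credentials and host below are made up):

```shell
# URL-encode a password containing '@', '/', and ':' before embedding
# it in GOVC_URL, using python3's urllib for the percent-encoding.
pass='p@ss/w:rd'
encoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$pass")
export GOVC_URL="admin:${encoded}@vcenter.local:443"
echo "$GOVC_URL"
```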

Grab a List of Instances

Shell

export GOVC_INSECURE=1
export GOVC_URL="login:pass@host:port"

ops instance list -t vsphere
This command lets you see the list of unikernels you've deployed. The first time you run it, OPS will try to set the "Guest IP Hack". This hack essentially allows ARP translation to happen so we know what IP your instance is listening on. There are other ways of achieving this, but we found this one rather straightforward.
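If you'd rather flip the setting yourself, the same ESXi advanced option can be toggled through the govc CLI (this assumes your GOVC environment variables already point at the host; it requires a live ESXi host to run):

```shell
# Enable the Guest IP Hack on the ESXi host so the hypervisor reports
# guest IPs learned via ARP inspection.
govc host.esxcli system settings advanced set -o /Net/GuestIPHack -i 1
```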

Yanking Out Logs

The logs are piped from a serial adapter out to a log file, and you can obtain them like so:

Shell

export GOVC_INSECURE=1
export GOVC_URL="login:pass@host:port"

ops instance logs -t vsphere gtest
By default this yanks the last 100 lines of output. For a real production install, you'd probably ship these out over syslog to something like Papertrail or another logging solution.

Use Cases

One of the more interesting things we've found with deploying unikernels to vSphere is that a lot of companies with existing deployments have so-called "brownfield" software. This typically involves lots of legacy software that a company might not have written themselves but still has to support. You know: ad-hoc products built on Tomcat or BIND a decade ago, that sort of thing. Activities such as patching every single time a new vulnerability comes out, or re-provisioning a server with the same sort of setup it had before disaster struck, are common routines.

Modern-day configuration management software such as Terraform or Chef/Puppet isn't as heavily used in these environments, so the ability to re-deploy a service instantly, without having to mess with provisioning an entire system each time, is a huge relief to some sysadmins. So is the knowledge that the only thing getting provisioned is the program in question, not a billion other programs that you may or may not want or trust.

If you are a vSphere power user, let us know what you think and what should be improved, and if you are a developer, we take pull requests!

Topics:
devops, unikernel, vsphere

