I Just Installed Kubernetes on My Workstation – Now What?
Installing Kubernetes on a workstation has become pretty easy. Once it's installed, though, what does it take to make it a useful and productive tool?
Let me start with the obligatory phrase, “Kubernetes is really catching on…” As I wrote this article, I spent a little time figuring out where Kubernetes is sitting on the hype curve.
[Image: Gartner hype cycle. NeedCokeNow - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=27546041]
I think Kubernetes may be a bit unique in that it seems to iterate through the hype curve. With each new use case or capability, while it may not land back at the very beginning of the curve, it certainly seems to return to the “First-generation…lots of customization needed” phase, moving, often quickly, through the rest of the curve. Needless to say, Kubernetes gets plenty of attention from just about every source of technical information.
Whether you’re surfing on the peak of the Kubernetes ecosystem wave, or you’re regularly interacting with a Kubernetes infrastructure, you’ve likely found the need to kick the tires on your own Kubernetes cluster.
Fortunately, you have several options; minikube, MicroK8s, and Docker for Mac/Windows are popular options for workstations. All cloud providers offer some type of Kubernetes service allowing you to explore the technology. In fact, getting your hands on a test/development Kubernetes cluster is pretty easy to do, and I’m sure that many, if not most, of you have at least fired up your own Kubernetes cluster.
You’ve selected a Kubernetes cluster provider and installed your own personal cluster to play with. Now what? This reminded me of when my grandfather bought me a TRS-80 Model I as a gift in 1978 (which I still have). He knew I loved computers and wanted to work in the field. I remember unpacking it, setting it up, and booting it. My grandfather then said, “Great! Make it do something.” For a brief moment, my joy turned to anxiety as I had no idea what to do with it. By then, I had already been coding for a couple of years, but I had never really done anything with a microcomputer. I don’t recall my response, but I’m sure it was some incoherent statement that I needed to do this and that before I could make it do something.
Have you felt like that with your newly installed personal Kubernetes cluster? You probably don’t have your grandfather asking you to make it do something, but after you’ve typed a few kubectl commands, you’re probably wondering, “Now what do I do?”
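If you're at that point, a few exploratory kubectl commands are a reasonable first move. A minimal sketch, assuming a Docker Desktop cluster (the "docker-desktop" context name is an assumption; minikube and MicroK8s use their own context names):

```shell
#!/usr/bin/env bash
# First look around a fresh local cluster. Guarded so it degrades gracefully
# when kubectl or the cluster is not available.
CONTEXT="${CONTEXT:-docker-desktop}"   # assumption: Docker Desktop's default context

if command -v kubectl >/dev/null 2>&1; then
  kubectl --context "$CONTEXT" get nodes                  # the workstation node(s)
  kubectl --context "$CONTEXT" get pods --all-namespaces  # system pods run on your behalf
  kubectl --context "$CONTEXT" api-resources | head       # object types the cluster understands
else
  echo "kubectl not found on PATH"
fi
echo "Explored context: $CONTEXT"
```

Even these three commands answer the "now what?" question for a few minutes: they show what the cluster is already running before you've deployed anything at all.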
[Image: TRS-80 Model I. Dave Jones - EEVblog, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=34239424]
Most Kubernetes cluster providers offer convenient methods for installing useful tools such as the Kubernetes Dashboard. You might also have discovered a world of easily installable components and applications using Helm. Working through these leads to a fun learning experience.
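As an example, the Dashboard itself can be installed through Helm. This is a hedged sketch; the repo URL and chart name below are the ones the Dashboard project publishes, but verify them against the project's current documentation before relying on them:

```shell
#!/usr/bin/env bash
# Install the Kubernetes Dashboard from its published Helm chart.
RELEASE="kubernetes-dashboard"

if command -v helm >/dev/null 2>&1; then
  helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
  helm repo update
  helm upgrade --install "$RELEASE" kubernetes-dashboard/kubernetes-dashboard \
    --namespace "$RELEASE" --create-namespace
else
  echo "helm not installed; skipping install of $RELEASE"
fi
echo "Release name: $RELEASE"
```

Using `helm upgrade --install` rather than plain `helm install` makes the command idempotent, which matters once you start rebuilding clusters repeatedly.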
Why Build a Local Deployment Environment?
Uses for Kubernetes cut across many segments of the technology industry. Operations and DevOps teams look to improve multiple aspects of deployments including continuous delivery pipelines. Business management looks at deployment portability to prevent vendor lock-in. Application developers can focus on developing and delivering to a consistent deployment and runtime environment.
Regardless of perspective or responsibility, to those with “hands on keyboards” Kubernetes offers an opportunity well beyond anything that has previously existed. Deployment environments are often complex. Facsimiles of production environments are difficult to reproduce. Kubernetes allows everyone in the deployment pipeline from developers, QA, DevOps and infrastructure teams to easily build, tear down and rebuild production or production-like environments. It doesn’t matter if an organization hosts its own systems, uses a cloud provider, or works in a hybrid environment, from the application-deployment perspective, Kubernetes provides a consistent abstraction over varied deployment topologies.
Making the Cluster “Do Something”
As mentioned, getting various components installed in a Kubernetes cluster is a straightforward process. However, getting to the point of building a production-like environment in a reproducible fashion can be a bit tedious. Yes, all those components are straightforward to install, but most of them have a myriad of configuration options. Once you have determined the options suitable for your use, no matter the deployment mechanism, you’ll end up with one or more configuration files per component.
One of the great things about having your own personal Kubernetes environment is that after you mess something up in the cluster (which you will), you can simply reinstall or reset the cluster. The component can be reinstalled using the configuration files customized to your environment. It really is awesome.
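The reset-and-rebuild loop can be as simple as pointing kubectl back at your saved configuration files. A minimal sketch, assuming your per-component YAML lives in a hypothetical ./cluster-config directory:

```shell
#!/usr/bin/env bash
# Reapply saved component configuration after a cluster reset.
CONFIG_DIR="${CONFIG_DIR:-./cluster-config}"   # assumption: one YAML file per component

if command -v kubectl >/dev/null 2>&1 && [ -d "$CONFIG_DIR" ]; then
  for f in "$CONFIG_DIR"/*.yaml; do
    kubectl apply -f "$f"    # declarative apply: safe to run repeatedly
  done
else
  echo "Nothing applied (kubectl missing or $CONFIG_DIR not found)"
fi
echo "Config dir: $CONFIG_DIR"
```

Because `kubectl apply` is declarative, rerunning the loop against a freshly reset cluster converges it back to the same state every time.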
Whether building a cluster for your own personal use or within your organization, you will eventually define the components that are to be available in your production cluster. Your organization may have already specified them. Keep in mind that I’m not considering the applications that will be installed in your production cluster; an upcoming article will discuss that. Here I’m discussing the components that will support the Kubernetes cluster and the applications that will be installed and run in the cluster.
Regardless of your situation, you’ll want to be able to build, rebuild, tweak, develop against, etc. the defined configuration. This should lead you to start considering how you are going to establish a pattern by which you can install, configure, and manage your cluster. Using the configuration files created when you built your cluster, you could build a simple shell script to build and configure your cluster in a repeatable manner.
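A first pass at such a script might be nothing more than a loop over your component configuration files. A sketch under the assumption that each component has a matching YAML file (the component names here are illustrative):

```shell
#!/usr/bin/env bash
# First-pass cluster build script: install each component in order.
set -euo pipefail

COMPONENTS="dashboard ingress registry"   # illustrative component list
for c in $COMPONENTS; do
  echo "Installing $c from ${c}.yaml"
  # kubectl apply -f "${c}.yaml"   # uncomment when running against a real cluster
done
```

A script this small is perfectly maintainable — the point of the next section is what happens as the list and its interdependencies grow.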
Taking the Next Step
Depending on the number of components you add to your cluster, you will find that maintaining shell scripts gets cumbersome. Details such as version compatibility, ingress configuration, DNS, storage, and so on, while not difficult to manage as single entities, get more involved as more moving parts are added to your cluster. For example, consider how you want to access the cluster. You could use an IP address, but that gets problematic, especially on a personal workstation. So, you decide to use a host name. Now, all components needing to know the name used to access the cluster (e.g., ingresses, the Docker registry) must be configured accordingly. This takes your shell script to the next level, introducing things like shared variables. I’ve been here. The more complete (not complex) your configuration becomes, the more involved your shell scripts will become – I guarantee it. I can also state from personal experience that shell scripts at this level start to become difficult to maintain.
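The shared-variable stage tends to look something like the sketch below: one host name defined once and fed to every component that needs it. The k8s.local name and the template file are assumptions for illustration:

```shell
#!/usr/bin/env bash
# Shared variables: define the cluster host name once, derive the rest.
set -euo pipefail

CLUSTER_HOST="k8s.local"                  # assumption: a hosts-file entry on the workstation
REGISTRY_HOST="registry.${CLUSTER_HOST}"  # the Docker registry rides on the same name

# Render a hypothetical ingress template with the shared host name:
# sed "s/{{CLUSTER_HOST}}/${CLUSTER_HOST}/" ingress-template.yaml > ingress.yaml

echo "Ingress host: ${CLUSTER_HOST}, registry: ${REGISTRY_HOST}"
# prints: Ingress host: k8s.local, registry: registry.k8s.local
```

Once several components each need their own rendered template, environment-specific overrides, and ordering constraints, this pattern is exactly where plain shell starts to strain.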
Tooling to the Rescue
Fortunately, a lot of smart people have already tackled similar configuration problems. Tools such as Terraform, Puppet, Chef, and Ansible have been, in one way or another, designed to configure and manage compute resources.
This was the next stop on my journey to a maintainable, automated local cluster build and configuration process. All these tools have merit. I ended up choosing Ansible. It appeared to best fit my needs. With a true configuration tool in hand, developing and maintaining these types of scripts becomes an exercise in following the best practices of the tool. You might be thinking, “Isn’t moving to a tool like Ansible overkill?” Not really. After several years of building local Kubernetes clusters, trying to maintain this kind of work with shell scripts becomes a frustrating endeavor.
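To make this concrete, here is a hedged sketch of what a fragment of that shell-script logic looks like once restated as an Ansible playbook. The `kubernetes.core.k8s` module is a standard Ansible collection module; the file names and task list are illustrative:

```shell
#!/usr/bin/env bash
# Write a minimal illustrative Ansible playbook to disk.
set -euo pipefail

cat > provision-cluster.yml <<'EOF'
- hosts: localhost
  connection: local
  vars:
    cluster_host: k8s.local            # the shared variable, defined once
  tasks:
    - name: Apply the ingress controller manifest
      kubernetes.core.k8s:
        src: ingress.yaml
        state: present
EOF

echo "Wrote provision-cluster.yml"
```

The win over shell is structural: variables, ordering, idempotency, and error handling are features of the tool rather than conventions you have to maintain by hand.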
So, What’s an Example of a Useful Cluster Configuration?
That’s a loaded question with many answers; to state the obvious, it depends on your specific situation. However, after working on multiple projects, I have learned to depend on a handful of key components. Over the years, as technology and standards have evolved, the specifics have changed, but there is a core set of features that I think belong in a Kubernetes deployment:
- A Dashboard view into the cluster
- Some type of cluster ingress
- Publishing to and retrieving from a Docker registry
- Publishing to and retrieving from a Helm repository
- Metrics gathering and reporting
- Logging and log file management
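Several of the core components above can be bootstrapped from their commonly published Helm charts. A hedged sketch: the repo URLs below are the projects' public chart repositories as commonly documented, and in real use you would pin chart versions rather than take the latest:

```shell
#!/usr/bin/env bash
# Add the public Helm chart repos behind several of the core components.
REPOS="
kubernetes-dashboard=https://kubernetes.github.io/dashboard/
ingress-nginx=https://kubernetes.github.io/ingress-nginx
prometheus-community=https://prometheus-community.github.io/helm-charts
"

for entry in $REPOS; do
  name="${entry%%=*}"   # part before '=' is the repo name
  url="${entry#*=}"     # part after '=' is the repo URL
  if command -v helm >/dev/null 2>&1; then
    helm repo add "$name" "$url"
  else
    echo "would run: helm repo add $name $url"
  fi
done
echo "Core component repos processed"
```

Capturing even this list in a script (or, better, in your configuration tool) is what keeps "my cluster" reproducible as the component set grows.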
Regardless of your situation, one of the reasons to choose a tool such as Ansible is that as your need for additional or alternative components arises, adding them to well-defined deployment tooling becomes easy.
A Sample Project
Enough of the conversation. I have developed a project that follows the principles outlined here. It’s available on GitHub at https://github.com/relenteny/provision-k8s. Currently, the project supports Docker for Mac and Docker for Windows. An additional prerequisite for Windows is that the script must be executed from WSL 2. Detailed installation instructions are provided in the project’s README file. While there aren’t too many prerequisites, I do advise that you read the README thoroughly before executing the provisioning script.
Anyone who has dealt with local workstation environments knows that when you install a complex piece of software, anything can happen. Based on years of experience dealing with environmental issues across those environments, the approach taken here is to provision a Kubernetes cluster using a prebuilt Docker image. This provides a more stable base from which a cluster may be configured. It's not foolproof, but it significantly decreases the challenges faced when trying to provide this type of functionality across a myriad of workstation configurations on multiple operating systems. So, once you meet the Docker for Mac/Windows requirements outlined in the README, you’re all set to go.
If you’re looking to develop/test against more than just a base install of Kubernetes, I hope you give this a try. Feedback on the process is more than welcome. Next, I’ll take a look at building and deploying an application using a Kubernetes cluster provisioned using this process. Stay tuned!
Published at DZone with permission of Ray Elenteny. See the original article here.