Kubernetes: Getting Started
Getting started with Kubernetes might seem a bit daunting at first. Fortunately, this collection of guides, tips, and advice will help out.
Getting started with Kubernetes sounds like quite a daunting feat. How do you get started with “an open-source system for automating deployment, scaling, and management of containerized applications”? Let’s start by examining Kubernetes’ beginnings.
Containers have been in use in the Unix world for a very long time; Linux containers are popular today thanks to projects like Docker.
Google created Process Containers in 2006 and later realized it needed a way to manage all of these containers. Borg was born as an internal Google project, and many tools sprang up around its users. Omega was then built, iterating on Borg: it kept cluster state separate from the cluster members, breaking up Borg’s monolith. Finally, Kubernetes emerged from Google. Kubernetes is now maintained by the Cloud Native Computing Foundation’s members and contributors.
If you want an “Explain Like I’m Five” guide to what Kubernetes is and some of its primitives, take a look at “The Children’s Illustrated Guide to Kubernetes.” The Guide (PDF) features a cute little giraffe that represents a tiny PHP app that is looking for a home. Core Kubernetes primitives like pods, replication controllers, services, volumes, and namespaces are covered in the guide. It’s a good way to wrap your mind around the why and how of Kubernetes. Fair warning though, it does not cover Kubernetes networking components.
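To make one of those primitives concrete, here is a minimal Pod manifest, the smallest deployable unit in Kubernetes. The names and image below are just placeholders, not anything from the guide:

```yaml
# A minimal Pod manifest (example names and image are placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: hello-php        # arbitrary example name
  namespace: default     # namespaces partition cluster resources
spec:
  containers:
    - name: app
      image: php:8-apache   # placeholder image for a tiny PHP app
      ports:
        - containerPort: 80 # port the container listens on
```

Saved to a file, a manifest like this is applied with `kubectl apply -f pod.yaml`.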
Let’s break down the two areas you can get started with Kubernetes. The first area is maintaining or operating the Kubernetes cluster itself. The second area is deploying and maintaining applications running in a Kubernetes cluster. The distinction here is to provide compartmentalization when learning Kubernetes. To be proficient at Kubernetes, you should know both, but you can get started knowing one area or the other.
To learn how the internals of Kubernetes work, I would recommend Kelsey Hightower’s “Kubernetes The Hard Way.” It is a hands-on series of labs for bringing up Kubernetes with zero automation. If you want to know how to stand up all the pieces that make up a full Kubernetes cluster, then this is the path for you.
If you want to get started with deploying containerized apps to Kubernetes, then minikube is the way to go. minikube is a tool that runs a Kubernetes cluster locally. You have to be able to run a hypervisor on your host, but most modern devices can. Setup differs by OS, but minikube runs on Linux, macOS, and Windows, so the sky’s the limit. Deploying Docker (or rkt) containers to minikube is easy; it’s the features that make your containers more resilient in a Kubernetes cluster that take more learning.
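Much of that resilience comes from objects like Deployments, which keep a desired number of replicas running and restart unhealthy containers. A hedged sketch of such a manifest (names and image are placeholders), which you could apply to a minikube cluster with `kubectl apply -f`:

```yaml
# Example Deployment showing two resilience features: replicas and a liveness probe.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # arbitrary example name
spec:
  replicas: 3                # Kubernetes keeps 3 copies running at all times
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.25  # placeholder image
          ports:
            - containerPort: 80
          livenessProbe:     # restart the container if this check fails
            httpGet:
              path: /
              port: 80
```

If a node dies or a container crashes, the cluster converges back to three healthy replicas without any manual intervention.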
After kicking the tires on minikube, if you feel like it is missing a few components, then I would recommend minishift or CoreOS Tectonic. minishift is the minikube of Red Hat OpenShift. OpenShift has a fantastic UI and many features that make Kubernetes a little better. CoreOS Tectonic is a more opinionated, enterprise-ready Kubernetes. Luckily, CoreOS Tectonic has a free sandbox version. The nice thing about CoreOS Tectonic is the networking and monitoring that come baked into this iteration of Kubernetes. CoreOS has been very thoughtful about the decisions made in Tectonic and it shows.
Regardless of how you get started learning Kubernetes, now is the time to start. There are so many places to deploy Kubernetes now that it doesn’t make sense not to kick the tires before determining whether it is a great fit for your use cases. Before you deploy to AWS, Google Cloud, or Azure, make sure you’re not wasting your time.
Published at DZone with permission of Chris Short. See the original article here.
Opinions expressed by DZone contributors are their own.