Build a Kubernetes Lab With Infrastructor
Learn how to create a Kubernetes cluster on virtual machines with Infrastructor.
Introduction
Infrastructor is an open-source server provisioning tool written in Groovy. It provides a powerful DSL to simplify configuration management of bare-metal servers and virtual machines. In this article, I'm going to show you how to set up a small Kubernetes cluster on top of virtual machines, which can be handy for learning Kubernetes.
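To get a feel for the DSL before diving in, here is a minimal sketch that uses only the constructs appearing later in this article; the node ID, host, key path, and command are illustrative placeholders, not part of the lab itself:
def inventory = inlineInventory {
    // illustrative node: host and keyfile are placeholders borrowed from the lab below
    node id: 'demo', host: '192.168.65.10', username: 'vagrant', keyfile: '.vagrant/machines/master/virtualbox/private_key'
}

inventory.provision { nodes ->
    task name: "check uptime", actions: {
        // run a single shell command on the node over SSH
        shell "uptime"
    }
}
Tasks run over SSH against the nodes in the inventory, and, as shown later, a filter can narrow a task down to nodes with specific roles.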
We are going to build a Kubernetes cluster on top of three VMs created with Vagrant and VirtualBox: one master node (the control plane) and two worker nodes.
The number of worker nodes can easily be increased, depending on your needs and the capacity of your hardware (see the sketch at the end of the article). We will also use the Calico 3.4 network plugin.
Prerequisites
Please make sure the following software is installed on your machine:
1. Java 8 - required to run Infrastructor
2. Infrastructor 0.2.2 (can be downloaded from the GitHub releases page or installed with SDKMAN, as shown below)
3. Vagrant 2.2.x
4. VirtualBox 5.2.x
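If you go the SDKMAN route, installation boils down to a single command; the candidate name infrastructor is an assumption here, so check sdk list if it differs. The remaining prerequisites can be verified from the terminal:
# candidate name assumed; run `sdk list` to confirm it
sdk install infrastructor

# verify the other prerequisites
java -version
vagrant --version
VBoxManage --version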
The source code of the provisioning scripts can be found in the GitHub repository.
Setting Up VMs
Let's describe VM configurations in a Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"

  config.vm.define "master" do |node|
    node.vm.network "private_network", ip: "192.168.65.10"
    node.vm.provider :virtualbox do |vb|
      vb.memory = 4096
    end
  end

  (1..2).each do |i|
    config.vm.define "node-#{i}" do |node|
      ip = "192.168.65.1#{i}"
      node.vm.network "private_network", ip: ip
      node.vm.provider :virtualbox do |vb|
        vb.memory = 2048
      end
    end
  end
end
The master node has 4GB of RAM and the worker nodes have 2GB each. Feel free to adjust these settings if needed. Launch the nodes with the following command:
vagrant up
After the VMs are up and running, we can run a provisioning script to configure a Kubernetes cluster.
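Before provisioning, you can confirm that all three machines are running and peek at the SSH settings Vagrant generated for them, which the inventory below relies on:
vagrant status
vagrant ssh-config master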
Provision a Kubernetes Cluster
Let's describe the setup procedure for a Kubernetes cluster with Infrastructor. We will define everything in a single file: provision.groovy. The provisioning script has five main tasks:
1. Describe the inventory: define all nodes and their roles. Infrastructor will connect to the nodes over SSH.
def keypath(def machine) { ".vagrant/machines/$machine/virtualbox/private_key" }

def inventory = inlineInventory {
    node id: 'master', host: '192.168.65.10', username: 'vagrant', keyfile: keypath('master'), tags: [role: 'master']
    (1..2).each {
        node id: "node-$it", host: "192.168.65.1$it", username: 'vagrant', keyfile: keypath("node-$it")
    }
}
2. Initialize the hosts: set each hostname to the corresponding node ID and turn off swap (the kubelet refuses to start while swap is enabled).
task name: "initializing host", parallel: ON_ALL_NODES, actions: {
file {
user = 'root'
target = "/etc/hostname"
content = node.id
}
insertBlock {
user = 'root'
target = '/etc/hosts'
block = "127.0.0.1\t${node.id}\n"
position = START
}
shell "sudo hostname ${node.id}"
shell "sudo swapoff -a"
}
3. Install Docker and the kubelet, kubeadm, kubectl, and kubernetes-cni packages using apt, and point the kubelet's node IP at the correct network interface (the private network instead of the default NAT):
task name: "installing kubernetes", parallel: ON_ALL_NODES, actions: { node ->
shell user: 'root', command: '''
apt update
apt install docker.io -y
usermod -aG docker vagrant
'''
file {
user = 'root'
target = '/etc/apt/sources.list.d/kubernetes.list'
content = 'deb http://apt.kubernetes.io/ kubernetes-xenial main'
}
shell user: 'root', command: '''
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
apt update
apt install kubelet kubeadm kubectl kubernetes-cni -y
'''
file user: 'root', target: '/etc/default/kubelet', content: "KUBELET_EXTRA_ARGS=--node-ip=$node.host"
}
4. Initialize the Kubernetes master node using kubeadm init and install the Calico CNI plugin:
task name: "initializing kubernetes master", filter: {'role:master'}, actions: { node ->
shell """
sudo kubeadm init --token $TOKEN \
--apiserver-advertise-address $node.host \
--apiserver-bind-port $API_SERVER_BIND_PORT \
--pod-network-cidr $POD_NETWORK_CIDR
"""
shell '''
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
'''
shell '''
kubectl apply -f https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/etcd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/calico.yaml
'''
}
5. Join the worker nodes to the cluster using kubeadm join:
task name: "initializing worker nodes", filter: {!'role:master'}, parallel: ON_ALL_NODES, actions: {
shell "sudo kubeadm join $MASTER_HOST:$API_SERVER_BIND_PORT --token $TOKEN --discovery-token-unsafe-skip-ca-verification"
}
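A quick note on the $TOKEN value used by both kubeadm init and kubeadm join (it is defined in the complete script below): kubeadm bootstrap tokens must match the format [a-z0-9]{6}.[a-z0-9]{16}, which is why the placeholder looks the way it does. On any machine where kubeadm is already installed, you can generate a well-formed token and paste it into the script instead of the placeholder:
# prints a random token in the required [a-z0-9]{6}.[a-z0-9]{16} format
kubeadm token generate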
Now the provision.groovy file contains all the steps to set up a Kubernetes cluster:
def keypath(def machine) { ".vagrant/machines/$machine/virtualbox/private_key" }

def inventory = inlineInventory {
    node id: 'master', host: '192.168.65.10', username: 'vagrant', keyfile: keypath('master'), tags: [role: 'master']
    (1..2).each {
        node id: "node-$it", host: "192.168.65.1$it", username: 'vagrant', keyfile: keypath("node-$it")
    }
}

inventory.provision { nodes ->

    def TOKEN = "change.me00000000000000"
    def POD_NETWORK_CIDR = "10.50.0.0/16"
    def API_SERVER_BIND_PORT = 8080
    def MASTER_HOST = nodes['master'].host
    def ON_ALL_NODES = nodes.size()

    task name: "initializing host", parallel: ON_ALL_NODES, actions: {
        file {
            user = 'root'
            target = "/etc/hostname"
            content = node.id
        }
        insertBlock {
            user = 'root'
            target = '/etc/hosts'
            block = "127.0.0.1\t${node.id}\n"
            position = START
        }
        shell "sudo hostname ${node.id}"
        shell "sudo swapoff -a"
    }

    task name: "installing kubernetes", parallel: ON_ALL_NODES, actions: { node ->
        shell user: 'root', command: '''
            apt update
            apt install docker.io -y
            usermod -aG docker vagrant
        '''
        file {
            user = 'root'
            target = '/etc/apt/sources.list.d/kubernetes.list'
            content = 'deb http://apt.kubernetes.io/ kubernetes-xenial main'
        }
        shell user: 'root', command: '''
            curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
            apt update
            apt install kubelet kubeadm kubectl kubernetes-cni -y
        '''
        file user: 'root', target: '/etc/default/kubelet', content: "KUBELET_EXTRA_ARGS=--node-ip=$node.host"
    }

    task name: "initializing kubernetes master", filter: {'role:master'}, actions: { node ->
        shell """
            sudo kubeadm init --token $TOKEN \
                --apiserver-advertise-address $node.host \
                --apiserver-bind-port $API_SERVER_BIND_PORT \
                --pod-network-cidr $POD_NETWORK_CIDR
        """
        shell '''
            mkdir -p $HOME/.kube
            sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
            sudo chown $(id -u):$(id -g) $HOME/.kube/config
        '''
        shell '''
            kubectl apply -f https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/etcd.yaml
            kubectl apply -f https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/calico.yaml
        '''
    }

    task name: "initializing worker nodes", filter: {!'role:master'}, parallel: ON_ALL_NODES, actions: {
        shell "sudo kubeadm join $MASTER_HOST:$API_SERVER_BIND_PORT --token $TOKEN --discovery-token-unsafe-skip-ca-verification"
    }
}
You can run it using the infrastructor CLI:
$ infrastructor run -f provision.groovy
The output should look like this:
$ infrastructor run -f provision.groovy
[INFO] running file: 'provision.groovy'
[INFO] task: 'initializing host'
[INFO] task: 'initializing host', done on 3 node|s
[INFO] task: 'installing kubernetes'
[INFO] task: 'installing kubernetes', done on 3 node|s
[INFO] task: 'initializing kubernetes master'
[INFO] task: 'initializing kubernetes master', done on 1 node|s
[INFO] task: 'initializing worker nodes'
[INFO] task: 'initializing worker nodes', done on 2 node|s
[INFO] file: 'provision.groovy' is done
EXECUTION COMPLETE in 2 minutes, 22.393 seconds
Then you can ssh to the master node and check the status of the cluster:
$ vagrant ssh master
$ kubectl get po --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-etcd-hs5wb                          1/1     Running   0          11m
kube-system   calico-kube-controllers-5d94b577bb-djj64   1/1     Running   0          11m
kube-system   calico-node-fc6gb                          1/1     Running   0          11m
kube-system   calico-node-rsxgs                          1/1     Running   0          11m
kube-system   calico-node-wrbfk                          1/1     Running   0          11m
kube-system   coredns-86c58d9df4-cmt9d                   1/1     Running   0          11m
kube-system   coredns-86c58d9df4-wlxjj                   1/1     Running   0          11m
kube-system   etcd-master                                1/1     Running   0          10m
kube-system   kube-apiserver-master                      1/1     Running   0          10m
kube-system   kube-controller-manager-master             1/1     Running   0          10m
kube-system   kube-proxy-6v2pm                           1/1     Running   0          11m
kube-system   kube-proxy-9t4vh                           1/1     Running   0          11m
kube-system   kube-proxy-ckh4j                           1/1     Running   0          11m
kube-system   kube-scheduler-master                      1/1     Running   0          10m
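You can also check that all three nodes have registered and reached the Ready state:
kubectl get nodes -o wide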
All pods should be up and running. Your Kubernetes cluster is ready to host workloads, and you can manage it with the kubectl command right from the master node. You can also modify the Vagrantfile and provision.groovy to add more worker nodes, as sketched below. Enjoy your Kubernetes learning!
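For example, adding a third worker is a matter of widening the same (1..2) range in both files. This is only a sketch that follows the existing naming and addressing pattern (it assumes node-3 gets 192.168.65.13); it is not part of the original lab:
# Vagrantfile: define one more VM (node-3 gets 192.168.65.13 by the existing pattern)
(1..3).each do |i|
  config.vm.define "node-#{i}" do |node|
    node.vm.network "private_network", ip: "192.168.65.1#{i}"
    node.vm.provider :virtualbox do |vb|
      vb.memory = 2048
    end
  end
end

// provision.groovy: add the matching nodes to the inventory
(1..3).each {
    node id: "node-$it", host: "192.168.65.1$it", username: 'vagrant', keyfile: keypath("node-$it")
}
Keep in mind that re-running the whole provisioning script against a cluster whose master is already initialized will typically fail at kubeadm init, so either rebuild the lab from scratch (vagrant destroy -f, then vagrant up and a fresh provisioning run) or join the new node manually with the kubeadm join command from step 5.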