Refcard #355

Getting Started With Rancher

What is Rancher? And how does it make Kubernetes crazy easy? Rancher is a complete Kubernetes stack that's easy to navigate — whether it's physical servers on-prem or in the cloud. This Refcard helps you get started with Rancher — from zero to fully production-ready.

Published: Jul. 15, 2021
Section 1

What Is Rancher?

Rancher is primarily a management and organization platform for Kubernetes clusters at scale. Rancher can not only deploy enterprise Kubernetes on-prem using physical hardware or VMware vSphere but also orchestrate any certified Kubernetes cluster, including Amazon's EKS, Google's GKE, and Microsoft's AKS, all from a single unified platform. That holds whether it's a Raspberry Pi cluster sitting on your desk, an RKE cluster running on physical servers in your data center, or a complete PaaS solution in AWS.

Section 2

Getting Started

The Rancher server is built on Kubernetes and runs as an application on any certified Kubernetes cluster, and, of course, Rancher is 100% open source with no license keys. Acting as the primary controller for managing downstream clusters, the Rancher server also provides access to those clusters through a standardized web UI and API. Rancher is primarily deployed on two types of clusters: RKE and K3s. RKE is mainly used in traditional data center and cloud deployments, while K3s is primarily used in edge and developer-laptop deployments.

RKE (Rancher Kubernetes Engine) 

RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers. It solves the common frustration of installation complexity with Kubernetes by removing most host dependencies and presenting a stable path for deployment, upgrades, and rollbacks. As long as you can run a supported Docker version, you can deploy and run Kubernetes with RKE.  

K3s (5 Less Than K8s)

K3s is a lightweight certified Kubernetes distribution. All duplicate, redundant, and legacy code has been removed, and everything needed to run a Kubernetes cluster is baked into a single binary of less than 40MB. This includes etcd, Traefik, and all the core Kubernetes components. It is designed to run in resource-constrained environments, remote locations, or inside IoT appliances. K3s has also been built to fully support ARM64 and ARMv7 nodes, so it can even be run on a Raspberry Pi.

Creating an RKE Cluster


Three Linux nodes with the following minimum specs: 

  • 2 vCPUs 
  • 8GB of RAM 
  • 20GB of SSD storage 

Installing Docker 

You can either follow Docker's installation instructions or use Rancher's install script to install Docker.


curl https://releases.rancher.com/install-docker/20.10.sh | sudo bash


Installing the RKE Binary

From your workstation or management server, download the latest RKE release (v1.2.8 at the time of writing):


cd /tmp 

wget https://github.com/rancher/rke/releases/download/v1.2.8/rke_linux-amd64  

chmod +x rke_linux-amd64 

sudo mv rke_linux-amd64 /usr/local/bin 

Installing the Kubectl Binary  

From your workstation or management server, download the latest kubectl release:


cd /tmp 

curl -LO https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl 

chmod +x kubectl 

sudo mv kubectl /usr/local/bin/kubectl 

Creating the Cluster Configuration

RKE uses a cluster.yml file to define the nodes in the cluster and the roles each node should have. A node can take on three different roles. The first is the etcd plane, the database for Kubernetes; this role should be deployed in an HA configuration with an odd number of nodes, the default size being three.

A five-member etcd cluster is the largest recommended size because write performance suffers at larger scales. The second role is the control plane, which hosts the Kubernetes controllers and other related management services; it should be deployed in an HA configuration with a minimum of two nodes.

Note: The control plane doesn't scale horizontally very well; it scales vertically instead.

The final role is the worker plane, which hosts your applications and related services. Nodes can support multiple roles, and in the default Rancher configuration, we'll be building a three-node cluster with all nodes running all roles.

Example cluster.yml file: 


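A minimal cluster.yml for a three-node cluster with all nodes running all roles might look like the following sketch; the node IPs, SSH user, and key path are placeholders for your environment:

```shell
# Write a minimal RKE cluster.yml; the IPs, user, and key path
# below are placeholders -- replace them with your own values.
cat > cluster.yml <<'EOF'
nodes:
  - address: 192.168.1.101
    user: rancher
    role: [controlplane, etcd, worker]
  - address: 192.168.1.102
    user: rancher
    role: [controlplane, etcd, worker]
  - address: 192.168.1.103
    user: rancher
    role: [controlplane, etcd, worker]
ssh_key_path: ~/.ssh/id_rsa
EOF
```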

For more examples, check out the Rancher documentation.

Creating the Cluster 

After creating the cluster.yml, we need to run the command rke up to build the cluster. RKE performs the following steps:

  1. Create an SSH tunnel to each node for Docker CLI access.
  2. Generate SSL certificates for all the different Kubernetes components.
  3. Create the etcd plane and configure all the etcd-related services.
  4. Create the control plane, which includes kube-apiserver, kube-controller-manager, and kube-scheduler.
  5. Create the worker plane and join all the nodes to the cluster.

Once these steps are done, RKE creates the file cluster.rkestate, which contains credentials and the current state of the cluster. RKE also creates the file kube_config_cluster.yml, which kubectl uses to access the cluster. To make access more manageable, we'll want to copy this file to kubectl's default config path.


mkdir -p ~/.kube/ 

cp kube_config_cluster.yml ~/.kube/config 

Verify access: 

kubectl get nodes 

Creating a K3s Cluster in Single-Node Mode


One Linux node with the following minimum specs: 

  • 2 vCPUs 
  • 4GB of RAM 
  • 10GB of SSD storage 

Installing K3s 

While SSH'd into the K3s node, we'll run the following commands:

sudo su -

curl -sfL https://get.k3s.io | sh -

Verify access: 

k3s kubectl get node 

Installing Rancher on an RKE or K3s Cluster


  • Kubectl access to the cluster 
  • Helm installed on the workstation or management server 

Note: For K3s clusters, update the command "kubectl" to "k3s kubectl". 

Installing the Helm Binary

From your workstation or management server, download the latest Helm release:


sudo su -

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash 

Configuring Helm 

Using the command helm repo add, we'll add the Rancher and Jetstack chart repositories to Helm:

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest 

helm repo add jetstack https://charts.jetstack.io 

Installing Cert-Manager  

Cert-manager will manage the SSL certificates for Rancher: 

kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.crds.yaml 

kubectl create namespace cert-manager 

helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.0.4 

Please see cert-manager's documentation for more details.

Installing Rancher 

We're now going to install Rancher with its default settings using the following commands:

kubectl create namespace cattle-system 

helm install rancher rancher-latest/rancher --namespace cattle-system --set hostname=rancher.example.com 
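If you prefer a values file over --set flags, the same install can be sketched as follows; the hostname and replica count are illustrative, and ingress.tls.source defaults to Rancher's self-signed certificates:

```shell
# Write a values file for the Rancher chart; the hostname and
# replica count are illustrative -- adjust for your environment.
cat > rancher-values.yaml <<'EOF'
hostname: rancher.example.com
replicas: 3
ingress:
  tls:
    source: rancher   # Rancher-generated self-signed certs (the default)
EOF

# Then install referencing the values file instead of --set flags:
# helm install rancher rancher-latest/rancher \
#   --namespace cattle-system -f rancher-values.yaml
```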


Configuring DNS for a Single Node 

In single-node mode, DNS is optional; the node's IP or hostname can be used in place of the Rancher URL.

Configuring the Front-End Load Balancer for HA 

To provide an HA setup for Rancher, we'll want to create a Layer-4 (TCP mode) or Layer-7 (HTTP mode) load balancer that sits in front of the cluster and forwards traffic on ports 80 and 443 to all nodes. The DNS record for the Rancher URL should point at the load balancer.
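As a sketch, an NGINX Layer-4 (TCP passthrough) configuration for such a load balancer might look like this; the node IPs are placeholders:

```shell
# Write an NGINX stream (Layer-4 TCP) config that forwards ports 80
# and 443 to every cluster node; the node IPs are placeholders.
cat > rancher-lb.conf <<'EOF'
worker_processes 4;
events {
    worker_connections 8192;
}
stream {
    upstream rancher_http {
        server 192.168.1.101:80;
        server 192.168.1.102:80;
        server 192.168.1.103:80;
    }
    server {
        listen 80;
        proxy_pass rancher_http;
    }
    upstream rancher_https {
        server 192.168.1.101:443;
        server 192.168.1.102:443;
        server 192.168.1.103:443;
    }
    server {
        listen 443;
        proxy_pass rancher_https;
    }
}
EOF
```

A Layer-7 setup would instead terminate TLS at the load balancer and forward HTTP with the appropriate forwarding headers.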

For more details, please see Rancher's documentation.  

Building a Downstream Cluster 

Downstream clusters in Rancher are RKE/RKE2/K3s clusters that Rancher manages for you. They can also be clusters built outside Rancher and then imported. In this example, we'll be making a standard three-node cluster with all nodes running all roles.


 Three Linux nodes with the following minimum specs: 

  • 2 vCPUs 
  • 4GB of RAM 
  • 20GB of SSD storage 

Installing Docker on All Nodes

You can either follow these Docker installation instructions or use Rancher's install script.


curl https://releases.rancher.com/install-docker/20.10.sh | sudo bash

Creating the Cluster in the Rancher UI 

  1. From the Clusters page, click Add Cluster.
  2. Choose Custom.
  3. Enter a Cluster Name.
    Note: This can be changed at a later date.
  4. Click Next.
  5. From Node Role, choose the roles that you want to be filled by a cluster node. You must provision at least one node for each role: etcd, worker, and control plane. In this example, we'll select all three roles.
  6. Copy the command displayed on-screen to your clipboard.

Adding the Nodes to the Cluster

We'll want to run the command from the previous step on each node. Once all three nodes have joined successfully, the cluster should move to an active state.

Section 3

Day-2 Operations

Etcd Backups

Snapshots of the etcd database can be taken and saved locally or to S3. Etcd backups capture the state of the Kubernetes cluster, including all the deployments, secrets, and configmaps for the cluster.

Note: This does not back up any application volumes used in the cluster. You'll need a third-party tool to back up your application data.

Configuring Local Etcd Backups 

  1. From the Clusters page, click Edit. 
  2. Fill in the "etcd Snapshot Backup Target" section. 
  3. Click Save. 

Configuring S3 Etcd Backups 

  1. From the Clusters page, click Edit. 
  2. Fill in the "etcd Snapshot Backup Target" section, selecting S3 and providing the bucket details and credentials.
  3. Click Save. 
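For RKE clusters built directly with the rke CLI rather than through the Rancher UI, the same backup schedule and S3 target can be expressed in cluster.yml. A sketch, where the bucket name, region, and credentials are placeholders:

```shell
# Append an etcd backup schedule to cluster.yml; the S3 bucket,
# region, and credentials below are placeholders.
cat >> cluster.yml <<'EOF'
services:
  etcd:
    backup_config:
      interval_hours: 12
      retention: 6
      s3backupconfig:
        access_key: S3_ACCESS_KEY
        secret_key: S3_SECRET_KEY
        bucket_name: etcd-backups
        region: us-east-1
        endpoint: s3.amazonaws.com
EOF
```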

Monitoring and Alerting

Rancher's monitoring stack is powered by Prometheus, Grafana, Alertmanager, the Prometheus Operator, and the Prometheus adapter.

This monitoring stack allows you to: 

  • Monitor the state of your cluster, nodes, and Kubernetes components.
  • Create custom dashboards to make it easy to visualize collected metrics via Grafana.
  • Configure alert-based notifications via Email, Slack, PagerDuty, etc. using Alertmanager.

Installing Monitoring   

  1. From the Cluster Explorer page, select Apps & Marketplace. 
  2. Select Monitoring from the catalog. 
  3. Click Install. 

Accessing Grafana

  1. From the Cluster Explorer page, select Monitoring.
  2. Click the Grafana link on one of the Monitoring dashboard tiles.


Installing OPA Gatekeeper 

  1. From the Cluster Explorer page, select Apps & Marketplace. 
  2. Select OPA Gatekeeper from the catalog. 
  3. Click Install. 

Configuring Constraints

OPA Gatekeeper constraints are a set of policies that allow or deny particular behaviors in a Kubernetes cluster. Below are some example policies that I usually recommend applying:
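As one illustration, here is a constraint built on the K8sRequiredLabels template from the Gatekeeper community library, requiring every namespace to carry an owner label; the constraint name and label are examples, and the template itself must already be installed:

```shell
# A Gatekeeper constraint requiring an "owner" label on namespaces;
# it assumes the K8sRequiredLabels ConstraintTemplate is installed,
# and the name and label here are illustrative.
cat > require-owner-label.yaml <<'EOF'
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["owner"]
EOF

# Apply it with: kubectl apply -f require-owner-label.yaml
```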

Hardening a Cluster

By default, Kubernetes can be vulnerable to numerous security issues, including privilege escalation, allowing users to gain root access to the Kubernetes host servers. To address this issue, Rancher created a guide with a number of setting changes to lock down a cluster.  

Configuration Steps 

Check out these instructions for hardening a production installation of an RKE cluster with Rancher.

Installing CIS Benchmark

  1. From the Cluster Explorer page, select Apps & Marketplace. 
  2. Select CIS Benchmark from the catalog. 
  3. Click Install. 

Configuring the CIS Scans

To verify that the cluster hardening was applied correctly and hasn't drifted, we can configure a scheduled scan using this guide.

Operational Backups 

By default, Rancher clusters have a scheduled backup job that takes an etcd backup every 12 hours. This only backs up the etcd database, however, not any volume data. It's also designed to restore a whole cluster by rolling it back, not to restore individual objects. This is where a third-party tool can be used to take volume- and object-level backups.

For more details on the Rancher etcd backup, please see this documentation. 

Installation Steps

To install a third-party data protection tool, like TrilioVault for example, on a Rancher cluster, we'll want to follow the official install guide for that tool.



Restore Steps 

We'll then want to follow the example application guide to deploy a WordPress site with a MySQL database and an attached volume. See here.

Then, to kick off a restore, we'll need to create a restore job, which can restore onto the same cluster or a different one (great for a DR plan), following these steps.

Section 4

Conclusion

This Getting Started With Rancher Refcard provides a step-by-step guide for installing Rancher, addressing standard Day-2 tasks, and making your Kubernetes cluster production-ready.

Section 5

Additional Documentation and Guides
