DZone

A Simplified Guide to Deploying Kubernetes Clusters

We discuss the steps for deploying a Kubernetes cluster, delve into the complexities involved, and offer troubleshooting tips to address common issues.

By Abhishek Gupta · Nov. 26, 2024 · Tutorial

Kubernetes has become the de facto standard for container orchestration, providing robust tools for deploying, scaling, and managing containerized applications. While it offers a powerful platform for managing applications across multiple nodes, the initial setup can be daunting. Bringing up a multi-node cluster introduces additional layers of configuration that must work together in harmony.

The procedures and options for deploying a Kubernetes multi-node cluster cover networking complexities, resource allocation, security configurations, and operational overheads. Depending on your infrastructure (whether on-premises, cloud, or hybrid) and your case-specific requirements, several common deployment methods exist, each with its own advantages and trade-offs. We’ll explore these to help you choose the best approach for your environment.

Using Managed Kubernetes Services (Easiest)

Managed Kubernetes services handle much of the setup and maintenance work for you, making them ideal if you don't require deep customization or prefer not to manage the cluster manually. These services typically offer benefits like auto-scaling, automated updates, and built-in cloud-native security. However, it’s important to consider potential downsides, like higher costs associated with managed services and the risk of vendor lock-in, which might limit your flexibility in the long term.

Popular managed services include:

  • Google Kubernetes Engine (GKE) (Google Cloud)
  • Amazon Elastic Kubernetes Service (EKS) (AWS)
  • Azure Kubernetes Service (AKS) (Azure)

Use kubectl to manage the cluster. For GKE, EKS, and AKS, you’ll download credentials with the respective cloud CLI tools to connect kubectl to your cluster.
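For example, each provider's CLI can write cluster credentials into your kubeconfig; the cluster, region, and resource-group names below are placeholders you would replace with your own:

```shell
# GKE: fetch credentials with the gcloud CLI (placeholder names)
gcloud container clusters get-credentials my-cluster --region us-central1

# EKS: update kubeconfig with the AWS CLI
aws eks update-kubeconfig --name my-cluster --region us-east-1

# AKS: fetch credentials with the Azure CLI
az aks get-credentials --resource-group my-rg --name my-cluster

# Verify that kubectl now points at the cluster
kubectl get nodes
```

Each command merges a context into `~/.kube/config`, so you can switch between clusters with `kubectl config use-context`.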

Each service comes with its own ecosystem of cloud-native integrations, making it easier to deploy and scale. GKE is known for integrating with Google’s AI/ML tools, while EKS offers deep integration with the broader AWS ecosystem, including security and monitoring tools like IAM and CloudWatch. Keep this in mind when choosing.

Using kubeadm (Self-Managed Cluster)

If you prefer more control over the infrastructure and want to deploy a self-managed Kubernetes cluster on your own machines, kubeadm is a popular choice. However, it requires a higher level of expertise and commitment to maintenance, as you'll need to handle tasks like network setup, security configuration, and upgrades yourself.

Prerequisites

  • Minimum of 2 nodes (1 control plane, 1 worker).
  • Linux installed (Ubuntu, CentOS, etc.).
  • Docker or another container runtime installed.
  • kubeadm, kubelet, and kubectl installed.

Steps

1. Prepare the Machines:

  • Install Docker on all machines.
  • Install kubeadm, kubelet, and kubectl on all machines.
  • Disable swap (Kubernetes doesn’t work with swap enabled).
  • Set up required networking ports and firewall rules.
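The preparation steps above can be sketched as follows; this assumes a systemd-based distribution such as Ubuntu and follows the standard kubeadm host setup:

```shell
# Disable swap now and keep it off across reboots (kubeadm requires this)
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Load the kernel modules needed for container networking
sudo modprobe overlay
sudo modprobe br_netfilter

# Allow iptables to see bridged traffic and enable IP forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```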

2. Initialize the Control Plane Node: On the Master Node, run:

  • sudo kubeadm init --pod-network-cidr=10.244.0.0/16

 Save the output, especially the command that lets worker nodes join the cluster.

3. Set Up kubectl on the Control Plane:

Shell
 
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

4. Install a Pod Network Add-On: For example, to install Flannel:

  • kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

5. Join Worker Nodes: On each worker node, use the join command provided during the kubeadm init step:

  • sudo kubeadm join <master-ip>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
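If you didn't save the join command, or the token has expired (tokens are valid for 24 hours by default), you can regenerate it on the control plane node:

```shell
# Prints a fresh, ready-to-run kubeadm join command with a new token
sudo kubeadm token create --print-join-command
```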

6. Verify Cluster Setup: On the Master Node:

  • kubectl get nodes

Using Minikube (Local Development)

For local development, Minikube offers a lightweight way to run Kubernetes on a single machine. It is an excellent choice for testing and development environments where you don't need the full scale of a multi-node production cluster.

On the Control Plane Node

1. Start Minikube on the Control Plane Node:

  • minikube start --nodes 1 --cpus 4 --memory 8192 --driver=docker

In this command:

  • --nodes 1: Specifies the number of nodes to start with (initially 1 control plane node).
  • --cpus 4:  Allocates 4 CPUs to the Minikube VM.
  • --memory 8192: Allocates 8GB of memory to the Minikube VM.
  • --driver=docker: Specifies the driver to use for running Minikube (e.g., Docker).

2. Verify the cluster:

  • kubectl get nodes

On Worker Nodes

Add additional nodes to the cluster. You can add as many nodes as you need.

  • minikube node add --cpus 2 --memory 4096 --worker

Repeat this command as needed to add more worker nodes. For example, to add two more worker nodes:

Shell
 
minikube node add --cpus 2 --memory 4096 --worker
minikube node add --cpus 2 --memory 4096 --worker


Once you've added the nodes, you can verify the nodes in your cluster using kubectl.

  • kubectl get nodes

You should see output similar to the following, showing multiple nodes:

Shell
 
NAME           STATUS   ROLES    AGE   VERSION
minikube       Ready    master   5m    v1.21.0
minikube-m02   Ready    <none>   2m    v1.21.0
minikube-m03   Ready    <none>   1m    v1.21.0


Using K3s

K3s is a lightweight Kubernetes distribution designed for resource-constrained environments, such as small servers, IoT devices, or edge computing. Developed by Rancher Labs, K3s simplifies the Kubernetes setup while reducing its resource footprint.

Single-Node Installation

For a single-node setup, the installation process is straightforward:

  • curl -sfL https://get.k3s.io | sh -

This command downloads and installs K3s, setting up a single-node Kubernetes cluster. After installation, K3s runs as a systemd service and creates a kubeconfig file at /etc/rancher/k3s/k3s.yaml.
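K3s bundles its own kubectl, so you can verify the single-node cluster right away on the server:

```shell
# The k3s kubectl wrapper reads /etc/rancher/k3s/k3s.yaml automatically
sudo k3s kubectl get nodes
```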

Multi-Node Installation

For a multi-node setup, designate one node as the server (master) and the rest as agents (workers).

On the server node, run:

  • curl -sfL https://get.k3s.io | sh -

Retrieve Node Token

Obtain the token from the server node, which will be used by agent nodes to join the cluster:

  • cat /var/lib/rancher/k3s/server/node-token

Install K3s Agent

On each agent node, run the following command, replacing <SERVER_IP> with the IP address of the server node and <NODE_TOKEN> with the token retrieved in the previous step:

  • curl -sfL https://get.k3s.io | K3S_URL=https://<SERVER_IP>:6443 K3S_TOKEN=<NODE_TOKEN> sh -

Accessing the Cluster

To interact with the K3s cluster from your local machine, copy the kubeconfig file from the server node, then edit the server address in the copied file to point to <SERVER_IP> instead of 127.0.0.1:

Shell
 
scp user@<SERVER_IP>:/etc/rancher/k3s/k3s.yaml ~/.kube/config

Set the KUBECONFIG environment variable:

export KUBECONFIG=~/.kube/config

Deploying Applications

With the cluster up and running, you can deploy applications using kubectl. For example, to deploy an Nginx web server:

Shell
 
kubectl create deployment nginx --image=nginx

kubectl expose deployment nginx --port=80 --type=NodePort
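NodePort services are assigned a port in the 30000–32767 range; one way to look it up and test the deployment (the node IP is a placeholder you'd replace with one of your own):

```shell
# Look up the port assigned to the nginx NodePort service
kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'

# Fetch the nginx welcome page through any node's IP
curl http://<NODE_IP>:$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
```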

Troubleshooting Issues

During the bring-up phase of each of these methods, various issues can arise, ranging from network misconfigurations to resource limitations. Let’s walk through a systematic approach to troubleshooting the most common ones, ensuring a smooth and successful cluster setup.

Network Problems

Issue

Pods cannot communicate with each other or with external services.

Troubleshooting Steps

Check CNI Plugin: Ensure that the Container Network Interface (CNI) plugin (e.g., Flannel, Calico) is correctly installed and running. Check the status of the CNI plugin pods:

  • kubectl get pods -n kube-system

Network Policies: Verify that network policies are not inadvertently blocking traffic. Review and adjust network policies as needed.

Node IP Configuration: Ensure the nodes have correct IP configurations and can reach each other. Use the ping command to test connectivity between nodes.
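A few commands that help narrow down where connectivity breaks; the net-test pod below is a throwaway busybox pod used only for diagnosis:

```shell
# Confirm the CNI and other system pods are healthy
kubectl get pods -n kube-system -o wide

# List any network policies that could be filtering traffic
kubectl get networkpolicy --all-namespaces

# Launch a temporary pod and check that cluster DNS resolves
kubectl run net-test --image=busybox --rm -it --restart=Never -- nslookup kubernetes.default
```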

Node Connectivity Issues

Issue

Worker nodes cannot join the cluster or become unresponsive.

Troubleshooting Steps

Check Node Token: Ensure the correct node token is being used when joining worker nodes to the cluster. Verify the token on the server node:

  • cat /var/lib/rancher/k3s/server/node-token

Firewall Rules: Ensure that firewall rules allow traffic on the necessary ports (e.g., 6443 for the API server). Update firewall settings if needed.

Node Log: Check the logs on the worker nodes for any errors related to joining the cluster:

Shell
 
sudo journalctl -u kubelet
sudo journalctl -u k3s-agent   # for K3s


Resource Constraints

Issue

Pods are not scheduling due to insufficient resources.

Troubleshooting Steps

Resource Requests and Limits: Ensure that pods have appropriate resource requests and limits defined. If not, they may fail to schedule due to resource constraints.
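For reference, requests and limits are declared per container in the pod spec; this minimal manifest is a sketch with illustrative names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo      # illustrative name
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:          # what the scheduler reserves when placing the pod
          cpu: "250m"
          memory: "128Mi"
        limits:            # hard caps enforced at runtime
          cpu: "500m"
          memory: "256Mi"
```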

Node Resources: Verify that nodes have sufficient CPU and memory resources available. Use the following command to check node resources: 

  • kubectl describe nodes

Cluster Autoscaler: If using a cluster autoscaler, ensure it is correctly configured to add or remove nodes based on resource demands.

Configuration Errors

Issue

Misconfigurations in manifests or deployment scripts cause failures.

Troubleshooting Steps

Validate Manifests: Use kubectl apply --dry-run to validate YAML manifests before applying them to the cluster:

  • kubectl apply -f <manifest-file> --dry-run=client

Check Logs: Review the logs of the Kubernetes components for configuration-related errors:

Shell
 
sudo journalctl -u kubelet
sudo journalctl -u k3s   # for K3s


Config Files: Verify that configuration files (e.g., kubeconfig, deployment scripts) are correctly formatted and contain valid values.

Use kubectl exec to access a shell inside a running pod and diagnose issues from within the container:

  • kubectl exec -it <pod-name> -- /bin/sh

Summary

We’ve explored several methods for deploying Kubernetes clusters, each suited to different needs and environments:

  • For cloud-based clusters, managed services like GKE, EKS, and AKS are the easiest.
  • For on-prem or self-managed clusters, kubeadm offers flexibility and control.
  • For development and testing, Minikube or K3s are excellent lightweight options.

By following these guidelines, you can deploy a Kubernetes cluster suited to your specific infrastructure and start taking advantage of its powerful orchestration capabilities.

References:

  • Marko Lukša's Kubernetes in Action
  • Nebrass Lamouchi's Getting Started with Kubernetes in Pro Java Microservices with Quarkus and Kubernetes: A Hands-on Guide
  • The Linux Foundation's Kubeadm documentation
  • The Linux Foundation's Minikube documentation
  • Rancher Labs' K3s documentation