How to Create a Kubernetes Cluster and Load Balancer for Local Development

This guide will show you one of many ways to set up and tear down a local Kubernetes cluster with a load balancer for use as a local development environment.

By Ken Lee · Jul. 16, 2021 · Tutorial · 16.7K Views



Overview

In this article, we will leverage Rancher's k3d to run a local Kubernetes cluster and install MetalLB as its load balancer. Reasons for setting up an environment like this include (but are not limited to):

  • You don't want to incur the cost of working with a Kubernetes cluster in the cloud
  • Fast setup and teardown
  • Full unrestricted access to your own Kubernetes cluster
  • Rapid prototyping
  • Any other reason you can think of (which probably suffices as well)

Prerequisites

  • Docker 
  • k3d (v4.4.6)
  • jq
  • kubectl
  • lens (optional)
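
Before starting, it may help to verify that these tools are installed and on your PATH. The snippet below is a quick sanity check (not part of the original setup) using each CLI's standard version command:

Shell
 
# confirm the required tools are available
docker --version
k3d version              # this guide assumes v4.4.6
jq --version
kubectl version --client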

Setup

Create the cluster and validate its creation:

Shell
 
# create the k3d cluster
k3d cluster create local-k8s --servers 1 --agents 3 --k3s-server-arg --no-deploy=traefik --wait

# set kubeconfig to access the k8s context
export KUBECONFIG=$(k3d kubeconfig write local-k8s)

# validate the cluster master and worker nodes
kubectl get nodes
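
If the agent nodes are still registering, you can optionally block until every node reports Ready before moving on. This is an extra convenience step using kubectl's built-in wait command:

Shell
 
# optionally wait until all nodes report the Ready condition (times out after 2 minutes)
kubectl wait --for=condition=Ready nodes --all --timeout=120s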

Determine your load balancer's ingress range by obtaining the cluster network's CIDR block. This range depends on the Docker network that your k3d cluster uses. The script below determines a suggested range.

Shell
 
# determine loadbalancer ingress range
cidr_block=$(docker network inspect k3d-local-k8s | jq '.[0].IPAM.Config[0].Subnet' | tr -d '"')
cidr_base_addr=${cidr_block%???}
ingress_first_addr=$(echo $cidr_base_addr | awk -F'.' '{print $1,$2,255,0}' OFS='.')
ingress_last_addr=$(echo $cidr_base_addr | awk -F'.' '{print $1,$2,255,255}' OFS='.')
ingress_range=$ingress_first_addr-$ingress_last_addr
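
To make the address arithmetic concrete, here is an illustrative trace of those variables, assuming the k3d Docker network was assigned the subnet 172.18.0.0/16 (your subnet may differ):

Shell
 
# illustrative values only -- the actual subnet depends on your Docker installation
# cidr_block          -> 172.18.0.0/16
# cidr_base_addr      -> 172.18.0.0      (the trailing "/16" stripped by ${cidr_block%???})
# ingress_first_addr  -> 172.18.255.0
# ingress_last_addr   -> 172.18.255.255
echo $ingress_range    # 172.18.255.0-172.18.255.255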

Deploy the Load Balancer, which leverages MetalLB:

Shell
 
# deploy metallb 
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml

# configure metallb ingress address range
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - $ingress_range
EOF
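
Before moving on, it is worth confirming that the MetalLB pods came up. A minimal check (the deployment name controller comes from the upstream MetalLB manifests applied above):

Shell
 
# verify the metallb controller and speaker pods are running
kubectl get pods -n metallb-system

# optionally wait for the controller deployment to finish rolling out
kubectl rollout status deployment/controller -n metallb-system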

Validation

Create an nginx test deployment and expose it via a LoadBalancer service. If the load balancer is working correctly, MetalLB should assign an external IP address.

Shell
 
# create a test deployment (e.g., nginx)
kubectl create deployment nginx --image=nginx

# expose the deployments using a LoadBalancer
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# obtain the ingress external ip
external_ip=$(kubectl get svc nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# test the loadbalancer external ip
curl $external_ip

Expected output:

Shell
 
# expected output: 

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
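
If the curl call hangs or the external IP variable is empty, inspecting the service will show whether MetalLB assigned an address (an EXTERNAL-IP of <pending> means it did not). Once validation succeeds, the test resources can be removed without tearing down the cluster; a quick optional cleanup:

Shell
 
# check whether an external IP was assigned (EXTERNAL-IP should not be <pending>)
kubectl get svc nginx

# remove the test resources once validation succeeds
kubectl delete svc nginx
kubectl delete deployment nginx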

Teardown

Destroy the cluster:

Shell
 
k3d cluster delete local-k8s
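
Optionally, confirm that the cluster is gone and clear the kubeconfig variable exported during setup:

Shell
 
# confirm no local-k8s cluster remains
k3d cluster list

# clear the kubeconfig environment variable set during setup
unset KUBECONFIG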

Conclusion

This post has gone over one of many ways to deploy a Kubernetes cluster to a local development environment. You can find an accompanying GitHub repository with the reference source here.

Topics: Docker, Load Balancing, Kubernetes

Opinions expressed by DZone contributors are their own.
