The Open Source Way to Rightsize Kubernetes With One Click

Overprovisioned Kubernetes workloads are a growing concern for developer teams, particularly as budget efficiency becomes more important. This article walks through an open source way to avoid overprovisioning Kubernetes workloads with a single click.

By Saiyam Pathak · Jul. 23, 22 · Tutorial


Rightsizing resource requests is a growing challenge for teams using Kubernetes, and it becomes especially critical as they scale their environments. Overprovisioning CPU and memory leads to costly overspending, while underprovisioning risks CPU throttling and out-of-memory errors. Dev and engineering teams that don’t thoroughly understand the live performance profile of their containers usually play it safe and request far more CPU and memory than required, often wasting a significant share of their budget.

The open source Kubecost tool (https://github.com/kubecost) has long included a Request Sizing dashboard to help Kubernetes users bring more cost efficiency to their resource requests. One of the tool’s most popular optimization features, the dashboard identifies over-requested resources, recommends appropriate per-container resource requests, and estimates the cost savings of implementing those recommendations. Because the recommendations are based on actual usage data from live containers, they reflect each workload’s real behavior. Until now, however, acting on them involved some hurdles: users had to manually update request values in their YAML to match Kubecost’s recommendations, or introduce an integration through a CD tool.

The newly released Kubecost v1.93 eliminates those hurdles by introducing 1-Click Request Sizing. With this feature added to the open source tool, dev and engineering teams can click a button to apply container request right-sizing recommendations automatically.

The following step-by-step example creates a deliberately overprovisioned Kubernetes workload and uses 1-Click Request Sizing to bring its requests down to an optimized size. Before we begin, you’ll need a Kubernetes cluster to work with. While this example uses Civo Kubernetes, Kubecost request sizing works in any Kubernetes environment.

If you need an example cluster, you can create one with the Civo CLI:

Shell
 
civo k3s create request-sizing-demo --region LON1
The cluster request-sizing-demo (84c6c595-505e-4e35-8e38-61364a1a80bc) has been created
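
If you created a fresh cluster this way, point kubectl at it before continuing. The snippet below is a minimal sketch that assumes the Civo CLI’s kubernetes config subcommand (flags may differ between CLI versions); any other way of fetching the cluster’s kubeconfig works just as well:

Shell
 
# Merge the new cluster's kubeconfig into ~/.kube/config and switch context to it
civo kubernetes config request-sizing-demo --region LON1 --save --switch
# Confirm the node is Ready before installing anything
kubectl get nodes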


Now, let’s get started.

1) Install Kubecost and Enable Cluster Controller

If you already have Kubecost installed, enable Cluster Controller with the Helm value shown below.

Kubecost ensures a transparent permission model by keeping all cluster modification capabilities in the separate Cluster Controller component. 1-Click Request Sizing APIs reside in Cluster Controller since Kubernetes API write permission is required to edit container requests.
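
If Kubecost is already running in your cluster, a full reinstall isn’t necessary; Cluster Controller can be switched on for the existing Helm release. The following is a minimal sketch assuming the release is named kubecost in the kubecost namespace (matching the fresh install below), with --reuse-values preserving your existing configuration:

Shell
 
# Enable only the Cluster Controller on an existing Kubecost release
helm upgrade kubecost kubecost/cost-analyzer \
	--namespace kubecost \
	--reuse-values \
	--set clusterController.enabled=true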

Here, we’ll install Kubecost and enable Cluster Controller:

Shell
 
helm repo add kubecost https://kubecost.github.io/cost-analyzer/
helm repo update
helm upgrade \
	-i \
	--create-namespace kubecost \
	kubecost/cost-analyzer \
	--namespace kubecost \
	--version "v1.94.0-rc.1" \
	--set clusterController.enabled=true


After waiting a few minutes for the containers to get up and running, check the Kubecost namespace:

Shell
 
→ kubectl get deployment -n kubecost
NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
kubecost-cluster-controller   1/1     1            1           2m12s
kubecost-cost-analyzer        1/1     1            1           2m12s
kubecost-grafana              1/1     1            1           2m12s
kubecost-kube-state-metrics   1/1     1            1           2m12s
kubecost-prometheus-server    1/1     1            1           2m12s


Here we see that Kubecost is installed and running correctly.
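
Because the 1-Click Request Sizing APIs live in Cluster Controller, it’s worth double-checking that this particular deployment has fully rolled out before continuing. This is plain kubectl, nothing Kubecost-specific:

Shell
 
kubectl rollout status deployment/kubecost-cluster-controller -n kubecost
# Expected once ready: deployment "kubecost-cluster-controller" successfully rolled out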

2) Make a Sample Overprovisioned Workload

We’ll purposefully create a workload that requests more resources than it needs, enabling 1-Click Request Sizing to come to the rescue. The following kubectl apply creates an “rsizing” namespace holding a 2-replica NGINX Deployment with deliberately large container resource requests:

Shell
 
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: rsizing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: rsizing
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        resources:
          requests:
            cpu: 300m
            memory: 500Mi
EOF


We’ll check that this deployment is scheduled and running correctly:

Shell
 
→ kubectl get pod -n rsizing
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-bd6c697bf-qxtvk   1/1     Running   0          10s
nginx-deployment-bd6c697bf-b2zml   1/1     Running   0          11s


Next, we’ll use a JSONPath expression to check on the running Pods and the resource requests of their containers:

Shell
 
→ kubectl get pod -n rsizing -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{range .spec.containers[*]}{.name}{'\t'}{.resources.requests}{'\n'}{end}{'\n'}{end}"

nginx-deployment-bd6c697bf-qxtvk    nginx    {"cpu":"300m","memory":"500Mi"}

nginx-deployment-bd6c697bf-b2zml    nginx    {"cpu":"300m","memory":"500Mi"}


Just as we planned, the containers are making outsized resource requests. Before fixing them, the optional check below shows just how little these idle NGINX containers actually use.
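
This check is a sketch that assumes a metrics server is available in the cluster (k3s ships one by default), so kubectl top can report live usage:

Shell
 
kubectl top pod -n rsizing
# An idle NGINX container typically sits around 0 to 1m CPU and a few MiB of memory,
# far below the 300m CPU / 500Mi memory requested above.

Next, we’ll fix those issues.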

3) View Kubecost Recommendations and Put Them Into Action

Access Kubecost’s frontend with kubectl's port-forward:

Shell
 
kubectl port-forward -n kubecost service/kubecost-cost-analyzer 9090

Allow Kubecost a few minutes to collect usage data from the live containers and prepare its request sizing recommendations. Then go to the request sizing recommendation page at http://localhost:9090/request-sizing.html?filters=namespace%3Arsizing. Note that this link includes a filter to show only recommendations for the “rsizing” namespace. With Cluster Controller enabled, the “Automatically implement recommendations” button is also available on this page.

The NGINX deployment isn’t getting any traffic, so it is severely overprovisioned. Kubecost recognized this and suggests shifting to a 10m CPU request and a 20MiB memory request. Click the “Automatically implement recommendations” button and you’ll be asked to confirm the change.

These recommendations are filtered to the “rsizing” namespace, so clicking the Yes option will apply recommendations for this filtered set. 
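
Applying the recommendations triggers a rolling update of the affected workload, so new Pods come up with the resized requests while the old ones terminate. If you’d like to watch it happen, standard kubectl suffices:

Shell
 
# Stream Pod changes in the namespace (Ctrl+C to stop) ...
kubectl get pods -n rsizing --watch
# ... or wait on the Deployment's rollout as a whole
kubectl rollout status deployment/nginx-deployment -n rsizing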

Now check the status of the cluster:

Shell
 
→ k get pod -n rsizing
NAME                                READY   STATUS        RESTARTS   AGE
nginx-deployment-574cd8ff7f-5czgz   1/1     Running       0          16s
nginx-deployment-574cd8ff7f-srt8j   1/1     Running       0          9s
nginx-deployment-bd6c697bf-qxtvk    0/1     Terminating   0          53m
nginx-deployment-bd6c697bf-b2zml    0/1     Terminating   0          53m


After the old Pod versions have terminated, use the JSONPath expression again to check the new Pods:

Shell
 
→ kubectl get pod -n rsizing -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{range .spec.containers[*]}{.name}{'\t'}{.resources.requests}{'\n'}{end}{'\n'}{end}"

nginx-deployment-574cd8ff7f-5czgz    nginx    {"cpu":"10m","memory":"20971520"}

nginx-deployment-574cd8ff7f-srt8j    nginx    {"cpu":"10m","memory":"20971520"}


Kubecost has successfully resized the container requests! The change is reflected at the NGINX Deployment level as well as on the Pods:

Shell
 
→ k get deploy -n rsizing nginx-deployment -o=jsonpath='{.spec.template.spec.containers[0].resources}' | jq
{
  "requests": {
    "cpu": "10m",
    "memory": "20971520"
  }
}
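
One practical note: 1-Click Request Sizing edits the live Deployment object directly. If this workload is also defined in version-controlled manifests (or reconciled by a GitOps tool, which would revert out-of-band changes on its next sync), carry the new request values back into source. A quick, hypothetical way to capture the live spec for that purpose:

Shell
 
# Export the live Deployment so the resized requests can be copied back into
# the manifest you keep under version control (output filename is arbitrary)
kubectl get deploy -n rsizing nginx-deployment -o yaml > nginx-deployment-live.yaml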

4) Remove the Demo Cluster

Don’t forget to clean up after this demonstration by removing the test cluster to avoid any unnecessary costs:

Shell
 
→ civo k3s remove request-sizing-demo --region LON1

Discover More About Kubecost’s 1-Click Request Sizing

This example demonstrated how Kubernetes users can easily and automatically optimize their resource utilization with 1-Click Request Sizing from the open source Kubecost tool. To learn more, additional documentation is available here:

  • 1-click request sizing feature guide
  • 1-click request sizing API reference
  • Cluster Controller advanced setup and reference
  • Request sizing recommendation API reference

Kubernetes costs can easily spiral out of control at scale if they aren’t carefully monitored, and if unexpected cost centers or errors that can trigger runaway expenses aren’t swiftly addressed and remediated. Teams using Kubernetes need visibility into the complete picture of their Kubernetes spending in real time. That visibility must include the ability to zoom out to a holistic view that accounts for external cloud services and infrastructure costs, and to zoom in and assign costs to each specific deployment, service, and namespace. Teams then need the tools to act on that picture and pursue cost efficiency across their Kubernetes implementations. In this vein, 1-Click Request Sizing adds a powerful tool to Kubernetes users’ arsenal, making it that much simpler to keep Kubernetes budgets in check.


Opinions expressed by DZone contributors are their own.
