Using Kubernetes to Manage Your Resources
See how you can use a Kubernetes cluster to control and enforce resource quotas and consumption for resources such as load balancer count and CPU.
If HAL isn’t going to unlock the pod bay doors for you, you probably have an issue with your users running wild in a large production Kubernetes cluster. These and a range of other issues can be solved or mitigated through finer-grained control and resource utilization, which is where Kubernetes (K8S) quotas come into play.
By applying quotas to each namespace, especially when developers and teams are allowed to schedule their own pods or create new services in an ad hoc manner, you can control and limit infrastructure costs in an autoscaled cluster. Defining quotas and limits separately for each namespace helps avoid rampant resource hogging and constrains each pod's resource consumption.
In any fresh Kubernetes cluster, we generally see two “namespace” resources, named “kube-system” and “default”. The “kube-system” namespace will contain the system pods, plus the cluster’s core components, like kube-apiserver, etcd, kube-controller-manager, kube-scheduler, dashboard, kube-dns and others, depending on your installation.
Over in the “default” namespace, you will find pods that are scheduled by default (when no other namespace is specified in the pod manifest). To effectively partition your Kubernetes cluster and delegate control to other teams, create a development namespace for each team; you can then control each team's dev/test environment resource consumption with custom quotas, all within the same cluster.
To apply quotas, you need to define a ResourceQuota object, as in the following example (your manifest might be shorter, of course, and contain only the few things you care about, such as CPU or load balancer count). You can also split such manifests into several objects so they are easy to reuse later by attaching them to other namespaces:
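A ResourceQuota manifest might look like this (the object name matches the “dev-quota” created below; the particular resources and values are illustrative, not prescriptive):

```yaml
# quota.yml - an illustrative ResourceQuota; adjust resources and values to your needs
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
spec:
  hard:
    requests.cpu: "2"            # total CPU all pods in the namespace may request
    requests.memory: 4Gi         # total memory all pods may request
    limits.cpu: "4"              # total CPU limit across all pods
    limits.memory: 8Gi           # total memory limit across all pods
    pods: "20"                   # maximum number of pods
    services.loadbalancers: "2"  # maximum number of load balancer services
```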
Values and resources will vary (explained later), but for now, save the example to a file named “quota.yml” and attach this quota to your chosen namespace:
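Assuming a namespace called “dev” (substitute your own), the command would be:

```shell
kubectl create -f quota.yml --namespace=dev
```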
(The result “resourcequota dev-quota created” shows that the operation was completed successfully.)
If you already have some pods running, you can see exactly how much of each resource is being utilized and what the limits are, with this command:
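Using the example “dev” namespace again (“quota” is the short name for the ResourceQuota resource):

```shell
kubectl describe quota --namespace=dev
```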
It shows current consumption and limit values. If a pod exceeds its limits, it may be terminated by the system. A millicpu is one thousandth of a virtual CPU core, so 1000m equals one core; what exactly counts as a core depends on your cloud provider. See the official Kubernetes compute resources documentation for more about possible values.
It is important to note that after you set a custom CPU or memory quota on a namespace, you should either ensure that every deployment or pod manifest you want to schedule specifies its “requests” and “limits” fields, or create a “default limit range” object that applies to every pod scheduled without explicit limits and requests in its manifest.
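For illustration, a pod manifest with both fields specified might look like this (the image and values are arbitrary examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-example
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        cpu: 100m      # guaranteed share the scheduler reserves for the container
        memory: 128Mi
      limits:
        cpu: 200m      # hard cap; exceeding it can get the container throttled or killed
        memory: 256Mi
```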
To verify which default limits you already have, run:
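For the example “dev” namespace used here (“limits” is the short name for the LimitRange resource):

```shell
kubectl describe limits --namespace=dev
```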
If it looks like this:
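That is, output along these lines (illustrative, showing a LimitRange that sets only a default CPU request of 100 millicpu):

```
Name:       limits
Namespace:  dev
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   cpu       -    -    100m             -              -
```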
Then you don’t have all the needed defaults set in that namespace, and any pod whose definition is missing any of three values (memory request, memory limit, CPU limit) will fail to be scheduled.
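The exact wording varies by Kubernetes version, but the error looks roughly like this (the pod and quota names are placeholders):

```
Error from server (Forbidden): error when creating "pod.yml": pods "nginx-example" is forbidden:
failed quota: dev-quota: must specify limits.cpu,limits.memory,requests.memory
```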
(Why are three values needed in the pod manifest, and not all four? Because a “Default Request” of 100 millicpu already exists in that namespace and will be applied to any pod that doesn’t specify a “cpu request” value in its definition.)
This failure to run a pod can happen if you activated some compute quotas but didn’t specify the corresponding values either in the pod definition or in the namespace “limits” object.
Compute quotas are:
1. CPU request
2. Memory request
3. CPU limit
4. Memory limit
When one or more of these are set in the “limits” object, you can skip them in the pod manifest; the needed defaults will be applied during scheduling.
Here is an example for a limits manifest that you can create, which takes care of default values for pods that didn’t specify them:
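A LimitRange manifest covering all four defaults might look like this (values are illustrative):

```yaml
# default-limits.yml - illustrative defaults applied to containers without explicit values
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
  - type: Container
    defaultRequest:    # applied when a container omits resources.requests
      cpu: 100m
      memory: 128Mi
    default:           # applied when a container omits resources.limits
      cpu: 200m
      memory: 256Mi
```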
You can save it to a file named “default-limits.yml”, for example, and create the object with:
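Again assuming the example “dev” namespace:

```shell
kubectl create -f default-limits.yml --namespace=dev
```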
And check if it was created with:
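The same describe command as before, for the assumed “dev” namespace:

```shell
kubectl describe limits --namespace=dev
```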
The output should now list default requests and limits for both CPU and memory.
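For example (illustrative output; your names and values will differ):

```
Name:       default-limits
Namespace:  dev
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   cpu       -    -    100m             200m           -
Container   memory    -    -    128Mi            256Mi          -
```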
Now you will not see the error when scheduling pods without “resources.limits” and “resources.requests” fields specified.
After setting the quota, any resource that would cause a quota threshold to be exceeded for that namespace cannot be created.
Here is a list of the supported quota resources that can be set:
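Commonly supported names include the following (grouped for readability; consult the official Kubernetes documentation for the authoritative list for your cluster version):

```
# Compute:
cpu, memory, requests.cpu, requests.memory, limits.cpu, limits.memory
# Storage:
requests.storage, persistentvolumeclaims
# Object counts:
pods, services, services.loadbalancers, services.nodeports,
configmaps, secrets, replicationcontrollers, resourcequotas
```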
Most of those names are self-explanatory, so there’s no need to describe each one in this article. To read more details about each, please check the official Kubernetes quota resource documentation page.
Published at DZone with permission of Oleg Chunikhin. See the original article here.