Silly Kubectl Trick #6: Label It to Enable It
The building block of almost all Kubernetes deployments is the pod – one or more containers sharing a network stack. Pods are where the magic happens, where we get our logs, and where we spend most of our time troubleshooting outages and malfunctions.
However, we rarely create pods ourselves. Instead, we rely on higher-level constructs like ReplicaSets and Deployments to create them for us.
Unfortunately, ReplicaSets (and by extension, Deployments) are just awful at naming pods.
This wouldn't matter too much if we never had to reference a pod by name. But that's exactly what we have to give kubectl for any command involving a pod, or one of its containers. If we want to review logs, execute arbitrary commands inside the container, or even see how the pod is doing, we're going to be constantly running kubectl get pods and copying and pasting those names around.
(Note: you can find a Kubernetes resource spec in the GitHub repository that accompanies this blog series, if you want to play along at home.)
Unless we use labels!
Labels are key+value pairs that we, as operators, get to assign to the things we deploy on Kubernetes. Here's how we specify pod labels in a Deployment definition:
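The Deployment spec itself isn't reproduced here, so here is a minimal sketch of what one might look like; the name, image, and label values are hypothetical, chosen to match the label scheme described below:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin-ui                      # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: admin-ui
  template:
    metadata:
      labels:                         # every pod this Deployment creates gets these labels
        app: admin-ui
        env: admin
        role: frontend
    spec:
      containers:
        - name: ui
          image: example/admin-ui:1.4 # hypothetical image
```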
While labels are used internally by other parts of Kubernetes (like Deployments, Services, and the like), we can also benefit from them with the kubectl get command and its -l flag. Combine that with some data extraction, and we can do some pretty amazing things:
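For instance, something like the following (a sketch; it assumes a reachable cluster and the env=prod label value described next):

```shell
# Show only the prod pods, using -l to filter by label...
kubectl get pods -l env=prod

# ...and add jsonpath extraction to print just their names,
# one per line, ready to feed into other commands.
kubectl get pods -l env=prod \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
```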
The demo.yml definitions that we're using define some labels that we can use to filter our queries:
- env - We set this to either prod (stuff that goes down and makes our lives unbearable) or admin (stuff that goes down and makes our lives inconvenient).
- app - This is mostly a unique-per-deployment label that we use for selectors to wire up the Deployment object through its ReplicaSet to its constituent Pods.
- service - Like app, we use this label to identify the pods that make up a named service, for wiring up Service definitions.
- role - In a front-end / back-end paradigm, which end does this piece belong to?
Armed with these semantics, we can run some neat sub-queries against our Pods:
Q: How is prod doing today?
Q: What images are we running on the backend?
Note: This one uses the image.fmt output format from Silly Kubectl Trick #2.
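The commands behind those two queries aren't shown above; sketched out, they might look like this (the role=backend value and the contents of the image.fmt custom-columns file are assumptions):

```shell
# Q: How is prod doing today?
kubectl get pods -l env=prod

# Q: What images are we running on the backend?
kubectl get pods -l role=backend -o custom-columns-file=image.fmt
```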
Using subshell expansion in Bash and Zsh, we can combine these calls with other commands that deal with individual pods:
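For example, to stream logs from one of the admin-ui pods without copying its name by hand (a sketch; the app=admin-ui value is assumed, and -o name prints "pod/<name>" tokens that other kubectl commands accept):

```shell
# The inner query prints matching pod names; the outer command consumes one.
kubectl logs $(kubectl get pod -l app=admin-ui -o name | head -n 1)
```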
Powerful, but a little less convenient than I would like. On my machine, I put the common prefix command from the above subshells into a small script in my $PATH. That shortens my logs calls to just:
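The script isn't reproduced here; a plausible reconstruction, given that the article invokes it as kid (the real script may differ):

```shell
#!/bin/sh
# kid: print matching resources as "type/name" tokens (e.g. "pod/web-abc123")
# that other kubectl commands accept. A reconstruction; the original may differ.
kubectl get "$@" -o name

# With kid on the PATH, the logs call above shortens to:
#   kubectl logs $(kid pod -l app=admin-ui | head -n 1)
```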
This also works for other resource objects, like Volumes, Services, etc.
You can do more than just look. For example, if the admin-ui from our sample deployment is off in the weeds and we want to delete and recreate all of the pods:
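A sketch, assuming the small script from earlier is named kid (redefined inline as a function here so the example is self-contained):

```shell
# kid: print matching resources as "type/name" tokens (assumed helper).
kid() { kubectl get "$@" -o name; }

# Delete every admin pod; their Deployments will recreate them.
kubectl delete $(kid pod -l env=admin)
```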
Just remember: with great power comes great responsibility.
Published at DZone. Opinions expressed by DZone contributors are their own.