The Ultimate Guide to Pods and Services in Kubernetes
Pods are the smallest deployable units containing one or more containers, while Services provide stable networking and load balancing to access pods.
Kubernetes is a powerful container orchestration platform, but to harness its full potential, it's important to understand its core components, with pods and services playing foundational roles. In this article, we'll dive into what they are and how they work together to expose and manage access to applications running within a Kubernetes cluster.
What Is a Pod?
Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
Before we create a pod, let's check the API resources available in your Kubernetes cluster with the kubectl api-resources command. This command lists the API resources supported by the Kubernetes API server, including their short names, API groups, and whether they are namespaced. This is useful for understanding the capabilities of your cluster, especially when working with custom resources or exploring new Kubernetes features. The kubectl explain command complements it by providing detailed information about individual resources.
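For example, the following commands list the supported resources and then drill into the Pod spec fields (the exact output depends on your cluster version):
kubectl api-resources
kubectl explain pod
kubectl explain pod.spec.containers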
Let's create a basic pod configuration file for a pod running a simple Nginx container.
Create a file named nginx-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
Here is a brief explanation of the terms used:
- apiVersion: v1: Specifies the API version.
- kind: Pod: Indicates that this configuration is for a pod.
- metadata.name: The name of the pod.
- metadata.labels: Key-value pairs that can be used to organize and select the pod.
- spec.containers: A list of containers that will run in the pod.
- spec.containers.name: The name of the container.
- spec.containers.image: The container image to use (in this case, the latest Nginx image).
- spec.containers.ports: The ports to expose from the container.
Use kubectl to apply the configuration file and create the pod:
kubectl apply -f nginx-pod.yaml
Check the status of the pod to ensure it has been created and is running:
kubectl get pods
You should see an output similar to this:
NAME        READY   STATUS    RESTARTS   AGE
nginx-pod   1/1     Running   0          10s
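For more detail than this one-line status, you can describe the pod or check its container logs:
kubectl describe pod nginx-pod
kubectl logs nginx-pod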
Next, delete the pod:
kubectl delete pod nginx-pod
Listing pods again should confirm the pod is gone:
kubectl get pod
No resources found in default namespace.
What Is a Service?
Creating a service for an Nginx pod in Kubernetes allows you to expose the Nginx application and make it accessible within or outside the cluster. Here's a step-by-step guide to creating a Service for an Nginx pod.
First, make sure an Nginx pod is running. If you don't already have one, create a YAML file named nginx-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
Apply the Pod configuration:
kubectl apply -f nginx-pod.yaml
Create a YAML file named nginx-service.yaml to define the Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
Here are a few things to note:
- selector: A label selector to match the Nginx pod.
- app: nginx: This should match the label in the Nginx pod.
- type: ClusterIP: The type of the Service. ClusterIP makes the Service accessible only within the cluster. You can also use NodePort or LoadBalancer to expose the Service externally.
Apply the Service configuration using kubectl:
kubectl apply -f nginx-service.yaml
Check that the Service has been created:
kubectl get services
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
nginx-service   ClusterIP   10.96.0.1    <none>        80/TCP    10s
Since the Service is of type ClusterIP, it is accessible only within the cluster. To access it from outside the cluster, you can change the Service type to NodePort or LoadBalancer.
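You can verify in-cluster access by running a throwaway client pod and requesting the Service by name; a quick check, assuming the default namespace and that the curlimages/curl image is pullable from your cluster:
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -s http://nginx-service
This should print the default Nginx welcome page.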
To expose the Service externally using NodePort, modify the nginx-service.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30007  # Specify a node port in the range 30000-32767 (optional)
  type: NodePort
Apply the updated Service configuration:
kubectl apply -f nginx-service.yaml
You can now access the Nginx application using the node's IP address and the node port (e.g., http://<node-ip>:30007).
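If your cluster runs on a cloud provider with load-balancer support, you can switch the Service type to LoadBalancer instead; a minimal sketch, assuming the nginx-service created above:
kubectl patch service nginx-service -p '{"spec":{"type":"LoadBalancer"}}'
kubectl get service nginx-service
Once the provider provisions a load balancer, the EXTERNAL-IP column is populated and the application is reachable at http://<external-ip>.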
Multi-Container Pods
Using a single container per pod provides maximum granularity and decoupling. However, there are scenarios where deploying multiple containers, sometimes referred to as composite containers, within a single pod is beneficial. These secondary containers can perform various roles: handling logging or enhancing the primary container (sidecar concept), acting as a proxy to external systems (ambassador concept), or modifying data to fit an external format (adapter concept). These secondary containers complement the primary container by performing tasks it doesn't handle.
Below is an example of a Kubernetes Pod configuration that includes a primary container running an Nginx server and a secondary container acting as a sidecar for handling logging. The sidecar container uses a simple logging container like BusyBox to demonstrate how it can tail the Nginx access logs.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-logging-sidecar
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-sidecar
    image: busybox:latest
    # Touch the file first so tail does not exit if Nginx has not written the log yet
    command: ["sh", "-c", "touch /var/log/nginx/access.log && tail -f /var/log/nginx/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  volumes:
  - name: shared-logs
    emptyDir: {}
You will then do the following:
- Save the YAML configuration to a file, e.g., nginx-with-logging-sidecar.yaml.
- Apply the configuration: kubectl apply -f nginx-with-logging-sidecar.yaml
- Verify that the Pod is running: kubectl get pods
- Check the logs of the sidecar container to see the Nginx access logs: kubectl logs -f <pod-name> -c log-sidecar
By following these steps, you apply the new configuration and set up a Pod with an Nginx container and a logging sidecar, ensuring the configuration is active and running in your Kubernetes cluster.
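To generate some access-log entries for the sidecar to pick up, you can port-forward to the pod and request the default page; a quick check, assuming the pod name used above:
kubectl port-forward pod/nginx-with-logging-sidecar 8080:80 &
curl -s http://localhost:8080/ > /dev/null
kubectl logs nginx-with-logging-sidecar -c log-sidecar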
Multi-container pods in Kubernetes offer several advantages, enabling more flexible and efficient application deployment and management.
Multi-Container Pod Patterns
Sidecar Pattern
Sidecar containers can enhance the primary application by providing auxiliary functions such as logging, configuration management, or proxying. This pattern helps extend functionality without modifying the primary container. A sidecar container can handle logging by collecting and forwarding logs from the main application container.
Ambassador Pattern
Ambassador containers act as a proxy, managing communication between the primary application and external services. This can simplify integration and configuration. An ambassador container might handle SSL termination or API gateway functions.
The Ambassador pattern involves using a sidecar container to manage communication between the primary application and external services. This pattern can handle tasks like proxying, load balancing, or managing secure connections, thereby abstracting the complexity away from the primary application container.
- Primary Application Container: A simple web server that makes HTTP requests to an external API
- Ambassador Container: An Envoy proxy configured to manage requests to the external API
apiVersion: v1
kind: Pod
metadata:
  name: app-with-ambassador
spec:
  containers:
  - name: web-app
    image: python:3.8-slim
    command: ["python", "-m", "http.server", "8000"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/envoy
  - name: envoy-proxy
    image: envoyproxy/envoy:v1.18.3
    args: ["-c", "/etc/envoy/envoy.yaml"]
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: config-volume
      mountPath: /etc/envoy
  volumes:
  - name: config-volume
    configMap:
      name: envoy-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-config
data:
  envoy.yaml: |
    static_resources:
      listeners:
      - name: listener_0
        address:
          socket_address:
            address: 0.0.0.0
            port_value: 8080
        filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              stat_prefix: ingress_http
              route_config:
                name: local_route
                virtual_hosts:
                - name: local_service
                  domains: ["*"]
                  routes:
                  - match:
                      prefix: "/"
                    route:
                      cluster: external_service
              http_filters:
              - name: envoy.filters.http.router
      clusters:
      - name: external_service
        connect_timeout: 0.25s
        type: LOGICAL_DNS
        lb_policy: ROUND_ROBIN
        load_assignment:
          cluster_name: external_service
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: api.example.com
                    port_value: 80
This example demonstrates the Ambassador pattern in Kubernetes, where an Envoy proxy acts as an intermediary to manage external communication for the primary application container. This pattern helps abstract communication complexities and enhances modularity and maintainability.
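To exercise the proxy, the primary container sends its outbound requests to the local Envoy listener on port 8080 rather than to the external host directly; a minimal check, assuming the pod above is running and api.example.com stands in for an API that is actually reachable from your cluster:
kubectl exec app-with-ambassador -c web-app -- python3 -c "import urllib.request; print(urllib.request.urlopen('http://localhost:8080/').status)"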
Adapter Pattern
Adapter containers can modify or transform data between the primary application and external systems, ensuring compatibility and integration. For example, an adapter container can reformat log data to meet the requirements of an external logging service.
The Adapter pattern in Kubernetes involves using a sidecar container to transform data between the primary application container and external systems. This can be useful when the primary application requires data in a specific format that differs from the format provided by or required by external systems.
Suppose you have a primary application container that generates logs in a custom format. You need to send these logs to an external logging service that requires logs in a specific standardized format. An adapter container can be used to transform the log data into the required format before sending it to the external service.
- Primary Application Container: A simple application that writes logs in a custom format.
- Adapter Container: A container that reads the custom logs and transforms them into JSON format; a third container (log-sender) then simulates shipping them to an external logging service.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-adapter
spec:
  containers:
  - name: log-writer
    image: busybox
    command: ["sh", "-c", "while true; do echo \"$(date) - Custom log entry\" >> /var/log/custom/app.log; sleep 5; done"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/custom
  - name: log-adapter
    image: busybox
    # Touch the source file first so tail does not exit if the writer has not created it yet
    command: ["sh", "-c", "touch /var/log/custom/app.log; tail -f /var/log/custom/app.log | while read line; do echo \"$(echo $line | sed 's/ - / - {\"timestamp\": \"/;s/$/\"}/')\" >> /var/log/json/app.json; done"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/custom
    - name: json-logs
      mountPath: /var/log/json
  - name: log-sender
    image: busybox
    command: ["sh", "-c", "while true; do cat /var/log/json/app.json | grep -v '^$' | while read line; do echo \"Sending log to external service: $line\"; done; sleep 10; done"]
    volumeMounts:
    - name: json-logs
      mountPath: /var/log/json
  volumes:
  - name: shared-logs
    emptyDir: {}
  - name: json-logs
    emptyDir: {}
This example demonstrates the Adapter pattern in Kubernetes, where an adapter container transforms data from the primary application container into the required format before sending it to an external system. This pattern helps integrate applications with external services by handling data format transformations within the sidecar container.
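To confirm the transformation is happening, you can watch the simulated sender or read the JSON file the adapter produces; a quick check, assuming the pod above is named app-with-adapter:
kubectl logs -f app-with-adapter -c log-sender
kubectl exec app-with-adapter -c log-adapter -- cat /var/log/json/app.json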
Summary
In summary, pods:
- Are the smallest deployable units in Kubernetes.
- Represent a single instance of a running process.
- Can contain one or more containers.
Services:
- Provide a stable IP address and DNS name for accessing pods.
- Enable load balancing across a set of pods.
- Facilitate service discovery within the cluster.
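For example, the nginx-service created earlier can be resolved and reached by its DNS name from any pod in the cluster; a minimal sketch, assuming it lives in the default namespace and the busybox image is pullable:
kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup nginx-service.default.svc.cluster.local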