How to Use GPU Nodes in Amazon EKS
In this article, we’ll set up GPU nodes in Amazon EKS in seven simple steps using the NVIDIA GPU Operator, and then look at basic ways to debug the setup after deployment.
Running GPU workloads on Amazon EKS requires configuring GPU-enabled nodes, installing the necessary drivers, and ensuring proper scheduling. Follow these steps to set up GPU nodes in your EKS cluster.
1. Create an Amazon EKS Cluster
First, create an EKS cluster without worker nodes using eksctl (for simplicity, we don’t use Terraform/OpenTofu):
eksctl create cluster --name kvendingoldo-eks-gpu-demo --without-nodegroup
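When eksctl finishes, it normally updates your local kubeconfig. As a quick sanity check (assuming the default kubeconfig handling; the update-kubeconfig call is only needed if eksctl did not write it for you):
# Point kubectl at the new cluster
aws eks update-kubeconfig --name kvendingoldo-eks-gpu-demo
# The API server should respond; there are no nodes yet
kubectl get nodes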
2. Create a Default CPU Node Group
A separate CPU node group ensures that:
- Kubernetes system components (kube-system pods) have a place to run.
- The GPU Operator and its dependencies will be deployed successfully.
- Non-GPU workloads don’t end up on GPU nodes.
Create at least one CPU node to maintain cluster stability:
eksctl create nodegroup --cluster kvendingoldo-eks-gpu-demo \
--name cpu-nodes \
--node-type t3.medium \
--nodes 1 \
--nodes-min 1 \
--nodes-max 3 \
--managed
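Once the node group is active, you can list it and confirm the CPU node has joined the cluster; a minimal check might look like this:
# List node groups in the cluster
eksctl get nodegroup --cluster kvendingoldo-eks-gpu-demo
# The CPU node should now be Ready
kubectl get nodes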
3. Create a GPU Node Group
GPU nodes should have appropriate taints to prevent non-GPU workloads from running on them. Use an NVIDIA-compatible instance type (you can check all options at instances.vantage.sh, but typically it’s g4dn.xlarge or p3.2xlarge) for such nodes:
eksctl create nodegroup --cluster kvendingoldo-eks-gpu-demo \
--name gpu-nodes \
--node-type g4dn.xlarge \
--nodes 1 \
--node-taints only-gpu-workloads=true:NoSchedule \
--managed
A custom taint only-gpu-workloads=true:NoSchedule guarantees that only pods with a matching toleration are scheduled on these nodes.
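To double-check that the taint actually landed on the new node, you can inspect it with kubectl (the node name placeholder is illustrative, as in the debugging commands later in this article):
kubectl describe node <gpu-node-name> | grep Taints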
4. Install the NVIDIA GPU Operator
The NVIDIA GPU Operator installs the NVIDIA drivers, the container toolkit, the Kubernetes device plugin, and monitoring tools. To install it, use the following steps:
1. Create gpu-operator-values.yaml:
tolerations:
  - key: "only-gpu-workloads"
    value: "true"
    effect: "NoSchedule"
2. Deploy the gpu-operator via Helm:
helm repo add nvidia https://nvidia.github.io/gpu-operator
helm repo update
helm install gpu-operator nvidia/gpu-operator -n gpu-operator --create-namespace -f gpu-operator-values.yaml
Pay attention to two things:
- A plain YAML deployment of k8s-device-plugin shouldn’t be used for production.
- The gpu-operator-values.yaml values set up tolerations for the gpu-operator daemonsets; without them, the operator pods cannot run on the tainted GPU nodes, and you won’t be able to schedule GPU workloads there.
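Before moving on, it is worth waiting until all operator components are up. A simple way to watch them (the namespace matches the -n gpu-operator flag used in the install command above):
kubectl get pods -n gpu-operator -w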
5. Verify GPU Availability
After deploying the GPU Operator, check whether NVIDIA devices are correctly detected on the GPU nodes with the following command:
kubectl get nodes -o json | jq '.items[].status.allocatable' | grep nvidia
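Alternatively, a per-node view of the allocatable GPU count can be printed with kubectl custom columns (the backslash-escaped dot in the resource name is required):
kubectl get nodes -o custom-columns='NODE:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'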
Check GPU Status on the Node Using AWS SSM (In Case of Issues)
If you need to manually debug a GPU node, connect using AWS SSM (Systems Manager Session Manager) instead of SSH.
Step 1: Attach SSM IAM Policy
Ensure your EKS worker nodes have the AmazonSSMManagedInstanceCore policy:
aws iam attach-role-policy --role-name <NodeInstanceRole> \
--policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
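If you don’t know the node instance role, one way to look it up is via the managed node group itself; this returns the role ARN, and the role name is its last path segment (cluster and node group names as used above):
aws eks describe-nodegroup --cluster-name kvendingoldo-eks-gpu-demo \
  --nodegroup-name gpu-nodes \
  --query "nodegroup.nodeRole" --output text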
Step 2: Start an SSM Session
Find the Instance ID of your GPU node:
aws ec2 describe-instances --filters "Name=tag:eks:nodegroup-name,Values=gpu-nodes" \
--query "Reservations[].Instances[].InstanceId" --output text
Start an AWS SSM session:
aws ssm start-session --target <Instance-ID>
Inside the node, check the GPU state:
- lspci | grep -i nvidia to check if the GPU hardware is detected
- nvidia-smi to verify the NVIDIA driver and GPU status
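A couple of additional on-node checks that often help narrow things down (these are generic Linux commands, not specific to EKS):
# Is the NVIDIA kernel module loaded?
lsmod | grep nvidia
# Any driver-related kernel messages?
sudo dmesg | grep -i nvidia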
If nvidia-smi fails or the GPU is missing, it may indicate that:
- The GPU Operator is not installed correctly.
- The node does not have an NVIDIA GPU.
- The NVIDIA driver failed to load.
Check the official NVIDIA documentation to solve these issues.
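Before digging into the documentation, it can also help to look at the driver daemonset logs in the operator namespace. The app label below is the one the GPU Operator typically uses for its driver pods; adjust it if your chart version labels them differently:
kubectl logs -n gpu-operator -l app=nvidia-driver-daemonset --tail=100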
6. Schedule a GPU Pod
Deploy a test pod to verify GPU scheduling. This pod:
- Requests a GPU.
- Uses tolerations to run on GPU nodes.
- Runs nvidia-smi to confirm GPU access.
---
apiVersion: v1
kind: Pod
metadata:
  name: kvendingoldo-gpu-test
spec:
  tolerations:
    - key: "only-gpu-workloads"
      value: "true"
      effect: "NoSchedule"
  nodeSelector:
    nvidia.com/gpu: "true"
  containers:
    - name: cuda-container
      # Use a CUDA base tag that exists on Docker Hub for your environment
      image: nvidia/cuda:12.0.0-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
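Apply the manifest and check the output; if everything is wired up correctly, the pod logs should contain the familiar nvidia-smi table (the file name gpu-test-pod.yaml is just an example; use whatever you saved the manifest as):
kubectl apply -f gpu-test-pod.yaml
kubectl logs kvendingoldo-gpu-test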
7. Handling "Insufficient nvidia.com/gpu" Errors
Typically, users face pods that fail to schedule, with errors like:
0/2 nodes are available: 1 Insufficient nvidia.com/gpu
This means that all GPUs are already allocated, or Kubernetes does not recognize available GPUs. Here, the following fixes may help.
Check GPU Allocations
kubectl describe node <gpu-node-name> | grep "nvidia.com/gpu"
If you don’t see any nvidia.com/gpu resources on your GPU node, the operator isn’t working, and you should debug it. This is typically caused by taints or tolerations. Pay attention that an nvidia-device-plugin pod should exist on each GPU node.
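A quick way to see which nodes actually run a device-plugin pod (the pod names reflect the default GPU Operator naming and may differ slightly between chart versions):
kubectl get pods -n gpu-operator -o wide | grep device-plugin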
Verify the GPU Operator
Check the status of operator pods:
kubectl get pods -n gpu-operator
If some pods are stuck in Pending or CrashLoopBackOff, restart the operator:
kubectl delete pod -n gpu-operator --all
Restart the Kubelet
Sometimes the kubelet gets stuck. In such cases, logging into the node and restarting the kubelet may help.
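Using an SSM session as described in step 5, a restart on a systemd-based EKS AMI typically looks like this:
sudo systemctl restart kubelet
sudo systemctl status kubelet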
Scale Up GPU Nodes
Increase GPU node count:
eksctl scale nodegroup --cluster=kvendingoldo-eks-gpu-demo --name=gpu-nodes --nodes=3
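The new nodes should register within a few minutes; you can watch them come up (the label assumes an EKS managed node group, which sets eks.amazonaws.com/nodegroup automatically):
kubectl get nodes -l eks.amazonaws.com/nodegroup=gpu-nodes -w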
Conclusion
Congrats! Your EKS cluster is all set to tackle GPU workloads. Whether you’re running AI models, processing videos, or crunching data, you’re ready to go. Happy deploying!