Assigning Pods to Nodes Using Affinity Rules

Affinity and anti-affinity are Kubernetes features that help you control where your applications are scheduled. Explore two scenarios where these features are useful.

By Rafael Natali · Sep. 27, 24 · Tutorial

This article describes how to configure your Pods to run on specific nodes based on affinity and anti-affinity rules. Affinity and anti-affinity let you tell the Kubernetes scheduler where your Pods should (or should not) be placed, which can help optimize performance, reliability, and compliance.

There are two types of affinity and anti-affinity, as per the Kubernetes documentation:

  • requiredDuringSchedulingIgnoredDuringExecution: The scheduler can't schedule the Pod unless the rule is met. This functions like nodeSelector, but with a more expressive syntax.
  • preferredDuringSchedulingIgnoredDuringExecution: The scheduler tries to find a node that meets the rule. If a matching node is not available, the scheduler still schedules the Pod.

Let's see a couple of scenarios where you can use this configuration.

Scenario 1: Kafka Cluster With Each Pod on a Different K8s Worker Node

In this scenario, I'm running a Kafka cluster with three nodes (Pods). For resilience and high availability, I want each Kafka node to run on a different worker node.

In this case, Kafka is deployed as a StatefulSet, so the affinity is configured in the .spec.template.spec.affinity field:

spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - kafka
        topologyKey: kubernetes.io/hostname


In the configuration above, I'm using podAntiAffinity because it lets us create rules based on Pod labels, not only on labels of the node itself. In addition, I'm using requiredDuringSchedulingIgnoredDuringExecution because I don't want two Kafka Pods running on the same worker node at any time. The labelSelector field looks for Pods carrying the label app=kafka, and topologyKey names the node label that defines the topology domain (here, one domain per node). Pod anti-affinity requires nodes to be consistently labeled; in other words, every node in the cluster must have a label matching the topologyKey.
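
For context, here is a minimal sketch of where that field sits inside a complete StatefulSet manifest. The resource names, replica count, and container image are assumptions for illustration, not taken from the original deployment:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka                     # hypothetical name
spec:
  serviceName: kafka              # hypothetical headless Service
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka                # the label the anti-affinity rule matches on
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - kafka
            topologyKey: kubernetes.io/hostname
      containers:
      - name: kafka
        image: apache/kafka:3.7.0 # hypothetical image and tag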

In the event of a failure of one of the workers (given that we have only three workers), the Kafka Pod will sit in Pending status because the other two workers already have a Kafka Pod running.

If we change the type from requiredDuringSchedulingIgnoredDuringExecution to preferredDuringSchedulingIgnoredDuringExecution, then kafka-2 would be assigned to another worker.
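
Note that the preferred form has a slightly different schema: each entry wraps the same term in a weight/podAffinityTerm pair. A minimal sketch of that relaxed rule (the weight value is arbitrary):

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100               # 1-100; higher means a stronger preference
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - kafka
          topologyKey: kubernetes.io/hostname

With this rule, the scheduler still tries to spread the Kafka Pods across workers, but it falls back to co-locating them when no other node is available.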

Scenario 2: Data Science Applications That Must Run on Specific Nodes

Conceptually, node affinity is similar to nodeSelector, in that you define where a Pod will run, but affinity gives us more flexibility. Let's say that our cluster has two worker nodes with GPU processors and some of our applications must run on one of those nodes.
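
For comparison, a plain nodeSelector can only pin exact key/value pairs; it cannot express "either of these two label values," which is what the affinity rule below does. A minimal sketch reusing the processor label from this scenario:

spec:
  nodeSelector:
    processor: gpu-east1          # exact match only; no way to say "gpu-east1 OR gpu-west1"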

In the Pod manifest, we configure the .spec.affinity field:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: processor
            operator: In
            values:
            - gpu-east1
            - gpu-west1


In this example, the following rule applies:

  • The node must have a label with the key processor, and the value of that label must be either gpu-east1 or gpu-west1.

In this scenario, the Data Science applications will be assigned to workers 1 and 2. Worker 3 will never host a Data Science application.
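
For illustration, node labels along the following lines would produce that outcome. The node names and the non-GPU label value are assumptions; in practice, labels are usually applied with kubectl label nodes rather than written into the Node manifest:

apiVersion: v1
kind: Node
metadata:
  name: worker-1                  # hypothetical node name
  labels:
    processor: gpu-east1          # matches the rule, so eligible for these Pods
---
apiVersion: v1
kind: Node
metadata:
  name: worker-3                  # hypothetical node name
  labels:
    processor: cpu-only           # hypothetical value; does not match, so never selected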

Summary

Affinity and anti-affinity rules give us flexibility and control over where our applications run in Kubernetes. They are important features for building a highly available and resilient platform. There are more options, such as weight, that you can explore in the official Kubernetes documentation; a brief sketch follows.
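
As a pointer, here is a hedged sketch of how weight could express a preference between the two GPU pools from Scenario 2 (the weight values are arbitrary). For each candidate node, the scheduler sums the weights of the preferred terms that match and favors the nodes with the highest total:

spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 80                # higher preference for the east GPU pool
        preference:
          matchExpressions:
          - key: processor
            operator: In
            values:
            - gpu-east1
      - weight: 20                # lower preference for the west GPU pool
        preference:
          matchExpressions:
          - key: processor
            operator: In
            values:
            - gpu-west1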

Published at DZone with permission of Rafael Natali. See the original article here.

Opinions expressed by DZone contributors are their own.
