Setting up Request Rate Limiting With NGINX Ingress

Master request rate limits in Kubernetes with NGINX Ingress. Set up, test, and secure web apps using NGINX and Locust for peak performance.

By Rob Newsome · Sep. 07, 23 · Tutorial

In today's highly interconnected digital landscape, web applications face the constant challenge of handling a high volume of incoming requests. However, not all requests are equal, and excessive traffic can put a strain on resources, leading to service disruptions or even potential security risks. To address this, implementing request rate limiting is crucial to preserve the stability and security of your environment.

Request rate limiting allows you to control the number of requests per unit of time that a server or application can handle. By setting limits, you can prevent abuse, manage resource allocation, and mitigate the risk of malicious attacks such as DDoS or brute-force attempts. In this article, we will explore how to set up request rate limiting using NGINX Ingress, a popular Kubernetes Ingress Controller. We will also demonstrate how to test the rate-limiting configuration using Locust, a load-testing tool.

To provide a clear overview of the setup process, let's visualize the workflow using a flow diagram:

[Flow diagram: overview of the request rate-limiting setup workflow]

Now that we understand the purpose and importance of rate limiting, let's dive into the practical steps.

Creating a Kubernetes Deployment

Before we can configure rate limiting, we need to have a sample application running. For the purpose of this demonstration, we will create a simple NGINX deployment in the default namespace of Kubernetes. NGINX is a widely used web server and reverse proxy known for its performance and scalability.

To create the NGINX deployment, save the following YAML content to a file named nginx-deployment.yaml:

YAML
 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80

Apply the deployment using the kubectl command:
Shell
 
kubectl apply -f nginx-deployment.yaml

The NGINX deployment is now up and running, and we can proceed to the next step.
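
If you want to confirm the rollout succeeded before moving on, a quick check looks like this:
Shell
 
kubectl rollout status deployment/nginx-deployment
kubectl get pods -l app=nginx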

Creating a Kubernetes Service

To route traffic to the NGINX deployment, we need to create a Kubernetes Service. The Service acts as a stable, in-cluster endpoint for the deployment and load-balances across its replicas; the Ingress we create later will expose it to the outside world.
Save the following YAML content to a file named nginx-service.yaml:
YAML
 
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP

Apply the service using the kubectl command:
Shell
 
kubectl apply -f nginx-service.yaml

With the service in place, the NGINX deployment is accessible within the Kubernetes cluster.
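
To sanity-check the Service from inside the cluster, you can run a throwaway curl pod (curlimages/curl is just one convenient image choice):
Shell
 
kubectl run tmp-curl --rm -it --restart=Never --image=curlimages/curl -- curl -s http://nginx-service.default.svc.cluster.local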

Installing NGINX Ingress Controller

Next, we need to install the NGINX Ingress Controller. NGINX is a popular choice for managing ingress traffic in Kubernetes due to its efficiency and flexibility. The Ingress Controller extends NGINX to act as an entry point for external traffic into the cluster.

To install the NGINX Ingress Controller, we will use Helm, a package manager for Kubernetes applications. Helm simplifies the deployment process by providing a standardized way to package, deploy, and manage applications in Kubernetes.

Before proceeding, make sure you have Helm installed on your system. You can follow the official Helm documentation to install it.
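
If you need it, the Helm project publishes an install script; a typical invocation looks like this (check the official docs for the current recommended method, as the script URL can change):
Shell
 
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh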

Once Helm is installed, run the following command to add the NGINX Ingress Helm repository:

Shell
 
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
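
After adding the repository, refresh your local chart index so Helm sees the latest chart versions:
Shell
 
helm repo update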

To install the NGINX Ingress Controller, use the following Helm command:
Shell
 
helm install my-nginx ingress-nginx/ingress-nginx

This command will deploy the NGINX Ingress Controller with default configurations. The controller will handle incoming requests and route them to the appropriate services within the cluster.
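To verify the controller is up and find the address external traffic should use, inspect its pods and Service. With the release name my-nginx, the chart typically names the controller Service my-nginx-ingress-nginx-controller (exact names can vary by chart version):
Shell
 
kubectl get pods -l app.kubernetes.io/name=ingress-nginx
kubectl get svc my-nginx-ingress-nginx-controller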
Now that we have the NGINX Ingress Controller installed, let's proceed to configure rate limiting.

Creating a Kubernetes Ingress Resource

To apply rate limiting using the NGINX Ingress Controller, we will create a Kubernetes Ingress Resource. The Ingress Resource defines how incoming traffic should be routed and what rules should be applied.
Create a file named rate-limit-ingress.yaml and add the following YAML content:
YAML
 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rate-limit-ingress
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "10"
    nginx.ingress.kubernetes.io/limit-rpm: "100"
    nginx.ingress.kubernetes.io/limit-rph: "1000"
    nginx.ingress.kubernetes.io/limit-connections: "100"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80

In this example, we set the following rate limits:
  • 10 requests per second (limit-rps)
  • 100 requests per minute (limit-rpm)
  • 1000 requests per hour (limit-rph)
  • 100 connections (limit-connections)
These limits can be adjusted according to your specific requirements.
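One detail worth knowing: for limit-rps and limit-rpm, ingress-nginx also applies a burst multiplier (5 by default), so short bursts above the nominal rate are tolerated before requests are rejected. Two related annotations can tune this behavior (the values here are illustrative):
YAML
 
annotations:
  # Effective burst = limit x multiplier; lower it for stricter enforcement
  nginx.ingress.kubernetes.io/limit-burst-multiplier: "2"
  # CIDR ranges exempted from rate limiting
  nginx.ingress.kubernetes.io/limit-whitelist: "10.0.0.0/8"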
Apply the Ingress Resource using the kubectl command:
Shell
 
kubectl apply -f rate-limit-ingress.yaml

The NGINX Ingress Controller will now enforce the specified rate limits for the NGINX service.
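
Before reaching for Locust, you can spot-check the limit with a quick burst of requests; ingress-nginx rejects rate-limited requests with HTTP 503 by default. (A minimal sketch; replace INGRESS_ADDRESS with your controller's external IP or hostname.)
Shell
 
# Send 20 rapid requests; with limit-rps set to 10, some should return 503
for i in $(seq 1 20); do
  curl -s -o /dev/null -w "%{http_code}\n" http://INGRESS_ADDRESS/
done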

Introduction to Locust UI

Before we test the rate-limiting configuration, let's briefly introduce Locust, a popular open-source load-testing tool. Locust is designed to simulate a large number of concurrent users accessing a system and measure its performance under different loads.

Locust offers a user-friendly web UI that allows you to define test scenarios using Python code and monitor the test results in real time. It supports distributed testing, making it ideal for running load tests from Kubernetes clusters.

If you'd like to try Locust locally first, you can install it with the following pip command:

Shell
 
pip install locust

Once installed, you can access the Locust UI by running the following command:
Shell
 
locust

By default, the Locust UI is accessible at http://localhost:8089. However, for the purposes of this article, we will deploy Locust in Kubernetes to test the rate-limiting configuration, which we cover in the next section.

Running Locust from Kubernetes

To test the rate-limiting configuration, we will deploy Locust in the Kubernetes cluster and configure it to target the NGINX service exposed by the NGINX Ingress Controller.
First, we need to deploy Locust in Kubernetes. Save the following YAML content to a file named locust-deployment.yaml:
YAML
 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: locust-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: locust
  template:
    metadata:
      labels:
        app: locust
    spec:
      containers:
        - name: locust
          image: locustio/locust:latest
          command:
            - locust
          args:
            - -f
            - /locust-tasks/tasks.py
            - --host
            - http://nginx-service.default.svc.cluster.local
          ports:
            - containerPort: 8089

This deployment creates a single replica of the Locust container, which runs the Locust load-testing tool. The container is configured to target the NGINX service via its in-cluster address: http://nginx-service.default.svc.cluster.local.
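
Note that the deployment points Locust at a locustfile, /locust-tasks/tasks.py, which nothing in the manifest actually provides. One way to supply it is a ConfigMap mounted into the pod; the sketch below is an assumption, not part of the original setup, and defines the GET /index.html task used in the test scenario later:
YAML
 
apiVersion: v1
kind: ConfigMap
metadata:
  name: locust-tasks
data:
  tasks.py: |
    from locust import HttpUser, task

    class NginxUser(HttpUser):
        # Matches the test scenario below: GET /index.html
        @task
        def index(self):
            self.client.get("/index.html")

To wire it in, add a volume referencing the ConfigMap (volumes: - name: locust-tasks, configMap: name: locust-tasks) and a matching volumeMount with mountPath: /locust-tasks on the Locust container.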

Apply the deployment using the kubectl command:
Shell
 
kubectl apply -f locust-deployment.yaml

Next, we need to access the Locust UI. Since the Locust deployment is running inside the cluster, we can use port forwarding to access the UI on our local machine.

Run the following command to set up port forwarding:
Shell
 
kubectl port-forward deployment/locust-deployment 8089:8089

The Locust UI will now be accessible at http://localhost:8089. Open a web browser and navigate there to access the Locust UI.

Now, let's set up a simple test scenario in Locust to verify the rate limiting.

  1. In the Locust UI, enter the desired number of users to simulate and the spawn rate.
  2. Set the target host to the hostname or public IP of the NGINX Ingress Controller (not nginx-service directly, which would bypass the Ingress and its rate limits).
  3. Confirm the locustfile defines a task that sends a GET request to /index.html (as in the ConfigMap sketch above).
  4. Start the test and monitor the results.

Locust will simulate the specified number of users, sending requests through the NGINX Ingress Controller to the NGINX service. The rate-limiting configuration will cap the number of requests allowed per unit of time; requests over the limit are rejected (HTTP 503 by default) and appear as failures in the Locust UI.

Conclusion

Implementing request rate limiting is essential for preserving the stability and security of your web applications. In this article, we explored how to set up request rate limiting using NGINX Ingress, a popular Kubernetes Ingress Controller. We also demonstrated how to test the rate-limiting configuration using Locust, a powerful load-testing tool.

By following the steps outlined in this article, entry-level DevOps engineers can gain hands-on experience in setting up request rate limiting and verifying its effectiveness. Remember to adjust the rate limits and test scenarios according to your specific requirements and application characteristics.

Rate limiting is a powerful tool, but it's important to use it judiciously. While it helps prevent abuse and protect your resources, overly strict rate limits can hinder legitimate traffic and user experience. Consider the nature of your application and the expected traffic patterns, and consult with stakeholders to determine appropriate rate limits.

With the knowledge gained from this article, you are well-equipped to implement request rate limiting in your Kubernetes environment and ensure the stability and security of your applications.
