Nginx: Reverse Proxy and Load Balancing

In this article, I would like to share my experience of setting up Nginx for load balancing and reverse proxy with SSL termination.

By Praveen KG · Mar. 17, 21 · Tutorial

You might have seen many articles on the internet about Nginx and how it can be used for load balancing and reverse proxying. In this article, I would like to share my experience of setting up Nginx for load balancing and reverse proxy with SSL termination.

Are Reverse Proxy and Load Balancer Similar?

Though the two terms sound similar, they serve different purposes. A reverse proxy accepts requests from clients and forwards them to servers for the actual processing, then relays the results from the servers back to the clients.

A load balancer distributes client requests among a group of backend servers and then relays the response from the selected server to the appropriate client. 

So, Do We Need Both?

Yes, in most use cases we need both.

Load balancers help eliminate a single point of failure, making the website/API more reliable by allowing it to be deployed across multiple backend servers. Load balancers also enhance the user experience by reducing the number of error responses clients see, either by detecting when one of the backend servers goes down and diverting requests away from it to the other servers in the pool, or through application health checks, where the load balancer sends separate health-check requests at frequent intervals and considers a server healthy based on a specified type of response, such as a 200 response code or an "OK" response body.

While a load balancer only makes sense when we have multiple backend servers, it often makes sense to deploy a reverse proxy even with just one web server or application server. The benefits of a reverse proxy are the following:

  • Security: With a reverse proxy, clients have no information about our backend servers, so there is no way for a malicious client to access them directly and exploit any vulnerabilities. Many reverse proxy servers also provide features that help protect backend servers from DDoS attacks, such as IP address blacklisting to reject traffic from particular clients, and rate limiting, which caps the number of connections accepted from each client.
  • Scalability and high availability: Along with load balancing, the reverse proxy allows us to add or remove backend servers based on traffic volume, because clients see only the reverse proxy's IP address. This helps you achieve high availability for your websites/APIs.
  • Optimized SSL encryption/decryption: Implementing SSL/TLS can significantly impact backend server performance, because the SSL handshake and the encrypt/decrypt operations for each request are quite CPU-intensive. Terminating SSL at the reverse proxy offloads that work from the backend servers, as sketched below.
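
Below is a minimal sketch of what SSL termination at the proxy looks like (the certificate paths and the backend upstream name are placeholders; a complete configuration appears later in this article):

server {
    listen 443 ssl;                         # TLS is terminated at the proxy
    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    location / {
        proxy_pass http://backend;          # plain HTTP to the backend pool
    }
}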

How Do You Set Up Nginx as a Reverse Proxy and Load Balancer?

Nginx supports proxying requests to servers using the HTTP(S), FastCGI, SCGI, uwsgi, or memcached protocols, through separate sets of directives for each type of proxy. In this article, I will use the HTTP protocol.

By default, Nginx uses the proxy_pass directive to hand over requests to a single server that can communicate using HTTP. The proxy_pass directive is typically found in location contexts.

location /api/card {
    proxy_pass http://example.com;
}

When a request for /api/card/report is handled by this block, the request URI will be sent to the example.com server as http://example.com/api/card/report.
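
Note that when the proxy_pass URL itself includes a URI path, Nginx replaces the part of the request URI that matched the location with that path. A minimal sketch, where /v1/card is a hypothetical backend path:

location /api/card {
    # a request for /api/card/report is forwarded as
    # http://example.com/v1/card/report
    proxy_pass http://example.com/v1/card;
}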

So, if your backend server supports multiple operations, you need to set up a location context for each, as in the code below:

location /api/creditcard {
    proxy_pass http://example.com;
}

location /api/debitcard {
    proxy_pass http://example.com;
}



Set Up Load Balancing

Above, I showed how to do a simple HTTP proxy to a single backend server. Nginx lets us easily scale this configuration out by specifying entire pools of backend servers to which we can pass requests.

To implement a pool of backend servers, Nginx provides the upstream directive, with which we can scale out our infrastructure to handle high traffic volumes with almost no effort. The upstream directive must be set in the http context of your Nginx configuration.

upstream app1 {
    server host1.example.com;
    server host2.example.com;
    server host3.example.com;
}

server {
    listen 80;
    server_name example.com;

    location /api/card {
        proxy_pass http://app1;
    }
}



In the above example, we’ve set up an upstream context called app1. Once defined, we can use app1 in the proxy_pass as http://app1, and any request made to example.com/api/card will be forwarded to the pool (app1) we defined above. Within that pool, a host is selected by applying a configurable algorithm. By default, Nginx uses a round-robin selection process.

Changing the Upstream Balancing Algorithm

Nginx Open Source supports four load-balancing methods. They are described in the subsections below.

Round Robin: This is the default method; requests are distributed evenly across the servers, with server weights taken into consideration.

upstream backend {
    # no load balancing method is specified for Round Robin
    server host1.example.com;
    server host2.example.com;
}
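
Since server weights are taken into account, a weighted variant can be sketched as follows (the weight value here is illustrative):

upstream backend {
    # host1 receives roughly three times as many requests as host2
    server host1.example.com weight=3;
    server host2.example.com;
}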



Least Connections: A request is sent to the server with the least number of active connections, with server weights again taken into consideration.

upstream backend {
    least_conn;
    server host1.example.com;
    server host2.example.com;
}



IP Hash: The server to which a request is sent is determined from the client's IP address. In this case, either the first three octets of the IPv4 address or the whole IPv6 address is used to calculate the hash value. This method guarantees that requests from the same address reach the same server, unless that server is unavailable.

upstream backend {
    ip_hash;
    server host1.example.com;
    server host2.example.com;
}
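
If one of the servers needs to be temporarily removed, it can be marked with the down parameter so that the current hashing of client IP addresses to the remaining servers is preserved (a minimal sketch):

upstream backend {
    ip_hash;
    server host1.example.com;
    server host2.example.com down;  # temporarily removed from rotation
}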



Generic Hash: The server to which a request is sent is determined from a user-defined key, which can be a text string, a variable, or a combination of the two. For example, the key may be a paired source IP address and port, or a URI, as in the example below:

upstream backend {
    hash $request_uri consistent;
    server host1.example.com;
    server host2.example.com;
}
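
Here, the optional consistent parameter enables ketama consistent hashing, which minimizes the number of keys that get remapped to different servers when a server is added to or removed from the pool.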



Set Up an HTTPS Server

To set up an HTTPS server, in the nginx.conf file we need to add the ssl parameter to the listen directive in the server block, and we also need to configure the locations of the server certificate and private key files, like so:

server {
    listen              443 ssl;
    server_name         www.example.com;
    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_client_certificate /etc/nginx/ssl/server-truststore.pem;

    # mutual SSL: if you want mutual authentication, set the value
    # to 'on'; otherwise, use 'optional'
    ssl_verify_client   on;

    ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers         HIGH:!aNULL:!MD5;
}



Now that we understand Nginx's reverse proxy and load-balancing support, let's configure both together. Create an nginx.conf file in the /etc/nginx directory and add the configuration below. (After editing, the configuration can be validated with nginx -t and applied with nginx -s reload.)

nginx.conf

events {}

http {
    upstream backend {
        server host1.example.com:8080;
        server host2.example.com:8080 max_fails=2 fail_timeout=5s;
    }

    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    server {
        listen              443 ssl;
        keepalive_timeout   70;
        ssl_certificate     /etc/nginx/ssl/server.crt;
        ssl_certificate_key /etc/nginx/ssl/server.key;
        ssl_client_certificate /etc/nginx/ssl/server-truststore.pem;

        # mutual SSL: if you want mutual authentication, set the value
        # to 'on'; otherwise, use 'optional'
        ssl_verify_client   on;

        location /services/api1 {
            proxy_pass http://backend;
        }

        location /services/api2 {
            proxy_pass http://backend;
        }
    }
}



Note: Nginx Open Source does not support active health-probe checks. To approximate health-probe functionality, you can configure the max_fails and fail_timeout parameters on each server in the upstream directive, as in the configuration above. Nginx Plus, however, supports a health-probe endpoint.
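
As an isolated sketch of those parameters (the values are illustrative):

upstream backend {
    # after 2 failed attempts within 5 seconds, the server is
    # considered unavailable for the next 5 seconds
    server host1.example.com max_fails=2 fail_timeout=5s;
    server host2.example.com;
}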


Opinions expressed by DZone contributors are their own.
