
Tuning NGINX: Part II


Here is the second half of this short series about taking an out-of-the-box instance of NGINX and tuning it to get more out of an already high-performance web server.


Don't miss out on Part I; read it here!

Worker Connections

The next parameter we are going to tune is the worker_connections configuration within NGINX. This value defines the maximum number of TCP sessions per worker. By increasing this value, the hope is that we can increase the capacity of each worker process.
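As a rough capacity sketch, the server's theoretical connection ceiling is the product of worker_processes and worker_connections. The worker count below is purely illustrative; substitute the value tuned in Part I.

```shell
# Concurrent connections are capped at roughly
# worker_processes * worker_connections.
# 4 workers is an assumed example value, 768 is Ubuntu's default.
workers=4
connections=768
max_clients=$((workers * connections))
echo "theoretical max clients: $max_clients"
```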

The worker_connections setting can be found within the events block in the /etc/nginx/nginx.conf configuration file.

events {
        worker_connections 768;
        # multi_accept on;
}

The default setting for Ubuntu’s installation of NGINX is 768. For this first test, we will try to change this setting to 1024 and measure the impact of that change.

events {
        worker_connections 1024;
        # multi_accept on;
}

Like the previous configuration change, in order for this adjustment to take effect we must restart the NGINX service.

root@nginx-test:~# service nginx restart

With NGINX restarted, we can run another test with the ab command.

# ab -c 40 -n 50000 http://<server-ip>/ | grep "per second"
Requests per second:    6068.41 [#/sec] (mean)

Once again, our parameter change has resulted in a significant increase in performance. With just a small change in worker_connections, we were able to increase our throughput by 800 requests per second.

Increasing Worker Connections Further

If a small change in worker_connections can add 800 requests per second, what effect would a much larger change have? The only way to find out is to make the parameter change and test again.
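To make repeated comparisons easier, the requests-per-second figure can be pulled out of ab's report with a small helper; a sketch, where the `rps` function name is our own, not part of ab:

```shell
# Extract the mean requests-per-second figure from an ab report.
# rps is a hypothetical helper name, not an ab feature.
rps() {
    grep "Requests per second" | awk '{print $4}'
}

# Example against a captured line of ab output:
echo "Requests per second:    6068.41 [#/sec] (mean)" | rps
```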

Let’s go ahead and change the worker_connections value to 4096.

worker_rlimit_nofile 4096;

events {
        worker_connections 4096;
        # multi_accept on;
}

In addition to the worker_connections value of 4096, we can see another new parameter set to 4096. The worker_rlimit_nofile parameter defines the maximum number of open files per worker process. It is specified here because, when increasing the number of connections per worker, we must also raise the open-file limit.

With NGINX, every open connection consumes at least one, and sometimes two, open file handles. By setting the maximum number of connections to 4096, we are effectively allowing each worker to open up to 4096 files. Without raising worker_rlimit_nofile to at least the same value as worker_connections, we may actually decrease performance: workers attempting to open new files would be rejected by the default open-file limit of 1024.
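A quick way to see the limit a worker would otherwise inherit is to check the shell's soft open-file limit. This is a sketch; the exact default varies by distribution, though 1024 is common.

```shell
# Without worker_rlimit_nofile, each worker inherits the process's
# open-file limit; on many distributions the soft limit defaults to 1024.
soft_limit=$(ulimit -Sn)
echo "soft open-file limit: $soft_limit"
```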

With these settings applied, let’s go ahead and rerun our test to see how our changes affect NGINX.

# ab -c 40 -n 50000 http://<server-ip>/ | grep "per second"
Requests per second:    6350.27 [#/sec] (mean)

From the results of the ab test run, it seems we were able to add about 300 requests per second. While this may not be as significant a change as our earlier gain of 800 requests per second, it is still an improvement in throughput. As such, we will leave this parameter as is and move on to our next item.

Tuning for Our Workload

When tuning NGINX or anything else for that matter, it’s important to keep in mind the workload of the service being tuned. In our case, NGINX is simply serving static HTML pages. There is a set of tuning parameters that are very useful when serving static HTML.

http {

        open_file_cache max=1024 inactive=10s;
        open_file_cache_valid 120s;
}

The open_file_cache parameters within the /etc/nginx/nginx.conf file are used to define how long and how many files NGINX can keep open and cached in memory.

Essentially these parameters allow NGINX to open our HTML files during the first HTTP request and keep those files open and cached in memory. As subsequent HTTP requests are made, NGINX can use this cache rather than reopening our source files.

In the above, the open_file_cache parameter allows NGINX to cache up to 1024 open file handles, and any entry not accessed within 10 seconds is removed from the cache. The open_file_cache_valid parameter defines how often NGINX checks whether currently cached entries are still valid; in this case, every 120 seconds.
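For reference, this cache can be tuned further with two related directives. The following is a sketch; the values shown are illustrative, not recommendations.

```nginx
http {
        open_file_cache max=1024 inactive=10s;
        open_file_cache_valid 120s;
        # Only cache a file after it has been requested at least
        # twice within the inactive window.
        open_file_cache_min_uses 2;
        # Also cache lookup errors such as "file not found".
        open_file_cache_errors on;
}
```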

These parameters should significantly reduce the number of times that NGINX must open and close our static HTML files. This means less overall work per request, which should mean a higher throughput. Let’s test our theory with another run of the ab command.

# ab -c 40 -n 50000 http://<server-ip>/ | grep "per second"
Requests per second:    6949.42 [#/sec] (mean)

With an increase of nearly 600 requests per second, the open_file_cache parameters have quite an effect. While they might seem universally useful, it is important to remember that they work in our example because we are serving only static HTML. If we were testing an application that served dynamically generated content, these parameters could result in stale or incorrect content being delivered to end users.


At this point, we have taken an out-of-the-box NGINX instance, measured a baseline of 2957.93 requests per second, and tuned it to 6949.42 requests per second, an increase of roughly 4,000 requests per second. We did this not only by changing a few key parameters, but also by experimenting with them.

While this article only touched on a few key NGINX parameters, the methods used in this article to change and measure impact can be used with other common NGINX tuning parameters, such as enabling content caching and gzip compression. For more tuning parameters, check out the NGINX Admin Guide which has quite a bit of information about managing NGINX and configuring it for various workloads.
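As one example of those next steps, gzip compression is controlled by a handful of directives in the http block. A minimal sketch, where the MIME types and minimum length are illustrative choices:

```nginx
http {
        # Compress responses before sending them to clients.
        gzip on;
        # text/html is always compressed when gzip is on; other
        # content types must be listed explicitly.
        gzip_types text/css application/javascript application/json;
        # Skip compression for very small responses, where the
        # CPU cost outweighs the bandwidth savings.
        gzip_min_length 1024;
}
```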



Published at DZone with permission of Ben Cane, DZone MVB. See the original article here.


