
Set Up an API Endpoint Distributed Over Multiple Servers Using NGINX Upstreams


In this post, we'll learn how to set up a simple API endpoint on your application using the open source NGINX proxy server.


This post is especially helpful if you are writing a service that relies on other services on the internet that are rate limited by IP address; one example is the WHOIS information service.

Today I am going to show you how to set up a simple API endpoint on your application using the open source NGINX proxy server. We are going to make this endpoint (example.com/api) span multiple servers. If you only need a standard single-server API, you can point the endpoint at one IP address and leave it at that.

First, we need to set up a few servers. For my example, I'm going to set up a production front-end server that will host my React.js application, plus two or three more production servers to run my Node.js/Express APIs.

I have set up 4 CentOS 7.4 Linux servers for this example.

Now that my front-end production server is running, I first need to update the software on the box. To do this, I will run the command below (as root, or prefixed with sudo):

yum update

This will update the software to the latest versions for security and bug fixes, etc.

Now we need to install the NGINX proxy server. To do so, use the command below:

yum install nginx -y

If the above command fails because the package cannot be found, run the following command to install the EPEL release repository on the system:

yum install epel-release -y

If the EPEL install was successful, go ahead and run the NGINX install command again:

yum install nginx -y

Once NGINX is installed, we need to start the service:

service nginx start

Then we need to configure the service to start automatically on boot:

systemctl enable nginx

So now if we visit the IP address of the server we should see the default NGINX web page, like so:

Default NGINX Page

So far, so good. Now let's navigate to the NGINX config folder and start working with some configuration files. On CentOS, this can be found at the following location:

/etc/nginx

For other Linux distributions, please visit the NGINX documentation for more information on where to locate this folder.

For this example, I am not going to set up virtual host configuration properly (in separate files inside the appropriate folders). I am going to use the default server block that ships with NGINX, just to keep the example simple.
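For reference, a proper setup would put each virtual host in its own file; on CentOS, the stock nginx.conf includes any *.conf file under /etc/nginx/conf.d/. A minimal sketch of what that might look like (the filename here is an assumption):

```nginx
# /etc/nginx/conf.d/example.com.conf -- hypothetical dedicated vhost file
server {
    listen 80;
    server_name example.com;
    # ...upstream mappings and locations would go here...
}
```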

We need to create a new upstream config block called api_group. This block will contain the IP addresses of all our API servers. This is a very basic use of the upstream functionality; there is a lot more you can do with it, but for now, this is all we need. See the example below:

upstream api_group {
    server 111.111.111.111;  # internal private IP address
    server 222.222.222.222;  # internal private IP address
}
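To illustrate what more the upstream block can do, here is a hedged sketch using standard NGINX directives (the third IP and all the parameter values are arbitrary examples, not part of the original setup): servers can be weighted, tuned for failure detection, or held in reserve as a backup:

```nginx
upstream api_group {
    server 111.111.111.111 weight=2;                      # receives twice as many requests
    server 222.222.222.222 max_fails=3 fail_timeout=30s;  # marked down for 30s after 3 failed attempts
    server 333.333.333.333 backup;                        # only used when the others are unavailable
}
```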

Once we have that set up, we just need to go into the server block and add a location for the /api endpoint that maps to our upstream. See the code below for an example:

server {
    …rest of server config
    location /api {
        proxy_pass http://api_group/;
    }
}
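One optional refinement, not part of the original setup but built from standard NGINX directives, is to forward the caller's host and IP address to the backends. This is useful if your Express APIs log or act on the client's address, which would otherwise appear to be the proxy's:

```nginx
location /api {
    proxy_pass http://api_group/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```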

With this in place, all we need to do is restart the NGINX service using the command below:

service nginx restart

Now I have set up two more servers for my API application. I haven't shown that setup here, but it's pretty much the same as the front-end production server above, except that each API server hosts only an index.html page — one with "A" and one with "B" written inside.

When I hit the example.com/api endpoint, it returns A or B on each page refresh. This shows that NGINX is doing a round robin across the servers listed in the upstream block earlier (a basic form of load balancing). If we replaced these placeholder pages with our API, we would (in theory) have doubled our WHOIS lookup limit, since the lookups are rate limited per IP address.
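Round robin is just NGINX's default balancing method. As a hedged sketch of an alternative (the IPs mirror the earlier example), least_conn sends each request to whichever backend currently has the fewest active connections:

```nginx
upstream api_group {
    least_conn;              # pick the backend with the fewest active connections
    server 111.111.111.111;
    server 222.222.222.222;
}
```

Note that the ip_hash method, which pins each client to one backend, would work against the use case here — spreading rate-limited lookups across backend IPs.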

I will be keeping an eye on this going forward and seeing how well it works!

If there is any way I can improve this post or if I have done anything wrong, leave a comment and let me know.

Thanks!


Topics:
nginx ,upstream ,integration ,api development

Published at DZone with permission of

