Develop an API Gateway Using Tarantool (Part 2): Adding Queues
Learn about developing an API Gateway with Tarantool as part of your microservices-based applications.
In the previous article we shared a bit of theory; now it's time for some code. From a purely practical point of view, we need to solve a simple problem: we need to unite our zoo of services behind a single client-facing service. Clients will send HTTP requests there, while our intelligent proxy will redirect traffic and return the results to clients. Of course, the innards of our system are not so simple (we are talking about a smart API Gateway, not NGINX in proxy mode), but at a glance, this is basically what we’re building.
In our previous series, "Tarantool Queues," we implemented a particular use case related to authentication; now we're going to generalize the problem. The model remains similar, though: we still have the same bundle that includes an NGINX web server plus the Tarantool NGINX upstream module. We’ll just slightly alter our configuration file, nginx.conf, and set up separate locations for our authentication server and our API Gateway.
(This is just a simple example, which in the course of development will be changed and supplemented. For more details, and as usual, please refer to the Tarantool NGINX upstream module documentation.)
events {
    worker_connections 1024;
}

http {
    # Add a Tarantool instance with a queue as a backend
    upstream backend {
        # Tarantool hosts
        server 127.0.0.1:3301;
    }

    # Testing web server on port 8081
    server {
        listen 127.0.0.1:8081;

        # Location for the authentication server
        location /auth {
            # Module on
            tnt_pass backend;
        }

        # Location for the API Gateway
        location /api_gateway {
            # REST mode on
            tnt_http_rest_methods get post put patch delete; # or all
            tnt_http_methods all;

            # This will help us later
            #tnt_multireturn_skip_count 2;
            #tnt_pure_result on;

            # Pass http headers and uri
            tnt_pass_http_request on parse_args;

            # Module on
            tnt_pass backend;
        }
    }
}
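After changing nginx.conf, it's worth validating the syntax and reloading NGINX before moving on (standard NGINX commands; the binary and config paths may differ on your system):

$ sudo nginx -t
$ sudo nginx -s reload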
The services will vary in how they handle load: some will respond quickly, while others will be rather slow. Our distributed system needs to work fast, though, so every request to the API Gateway should be put into a queue. As mentioned above, in the previous series we implemented a queue server using a Tarantool instance and the Tarantool Queue module, but it was only for authentication purposes. Now you will see that it can do even more (with some alterations).
You can get installation instructions for Tarantool on the official website. Then you'll need to install the queue package. In Ubuntu, for example, you can simply run:
$ sudo apt-get install tarantool-queue
In the “Queues” series we programmed all of the necessary harness code as stored procedures (Lua functions). That won’t do here, because our API Gateway must work with a number of different services. Thus, we must adjust the queue server. We will add one more queue for requests to other services and, of course, the request bodies will include a token to be sent along (authentication is one of the tasks solved by an API Gateway). Each service will authorize the client according to the transferred token. Below is an example, which will be updated as the other functional blocks of the project appear. Let’s name the file q12.lua; please read the Tarantool documentation to learn how to correctly place it and how to make a symlink.
-- Initialize the instance and create the queues, if they don’t already exist
box.cfg {
    listen = 'localhost:3301'
}

queue = require 'queue'
queue.start()
box.queue = queue

box.once('grant', function()
    box.schema.user.grant('guest', 'read,write,execute', 'universe')
end)

q1 = queue.create_tube('q1', 'fifottl', { if_not_exists = true })
q2 = queue.create_tube('q2', 'fifottl', { if_not_exists = true })

-- Make a functional harness for our API

-- Add non-authentication API Gateway requests to the queue
function api_gateway(req, tkn, ...)
    return q2:put({ query = req, token = tkn }, { ttl = 60 })
end

-- AUTHENTICATION

-- Register a new user
function registration(email)
    return q1:put({ type = 1, login = email }, { ttl = 60 })
end

-- Complete registration with a confirmation code
function complete_registration(email, code, password)
    return q1:put({ type = 2, login = email, token = code, pass = password }, { ttl = 60 })
end

-- Pass authentication
function auth(email, password)
    return q1:put({ type = 3, login = email, pass = password }, { ttl = 60 })
end

-- Set an account profile
function set_profile(user_id, profile_table)
    return q1:put({ type = 4, id = user_id, profile = profile_table }, { ttl = 60 })
end

-- Get user data by session ID
function check_auth(session)
    return q1:put({ type = 5, session_id = session }, { ttl = 60 })
end

-- Kill the session
function drop_session(session)
    return q1:put({ type = 6, session_id = session }, { ttl = 60 })
end

-- Remove the account
function delete_user(user_id)
    return q1:put({ type = 7, id = user_id }, { ttl = 60 })
end
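Once the instance is running with q12.lua, you can sanity-check the harness straight from the Tarantool console of that instance; the arguments below are made-up test values:

api_gateway('/path/to/service/api?q=query', 'some-token')  -- puts a task into q2
registration('user@example.com')                            -- puts a task into q1
q2:take(0)  -- a consumer would take the queued task like this (0 means don't wait)

A taken task stays in the taken state until you ack or release it.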
The API Gateway must accommodate various services with RESTful (and other) APIs. In other words, it needs to handle arbitrary HTTP requests with a URI, a method (“GET,” “PUT,” “POST,” “DELETE”), and so on. A URI uniquely identifies a service so that our smart proxy can forward the clients’ requests. The API Gateway should first queue, then parse and execute the requests.
So far our queue server is more like a training project; we will add some value in the next article. But to test the server, you can use any HTTP client. Let me remind you that we now have two locations in NGINX: “/auth” for authentication (as in the "Queues" series) and “/api_gateway” for API requests (calls). The stored procedure “api_gateway” will add the query parameters to the queue, so you can make, for example, an HTTP POST to 127.0.0.1:8081/api_gateway/path/to/service/api and send JSON in the body, or just send some parameters in the URI (like this: /api_gateway/path/to/service/api?q="query").
Here is a wget usage example:
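The service path and JSON payload here are purely illustrative, and --method/--body-data require wget 1.15 or newer:

$ wget -qO- --method=POST \
       --header='Content-Type: application/json' \
       --body-data='{"q": "query"}' \
       'http://127.0.0.1:8081/api_gateway/path/to/service/api'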
So now we have a login/password authentication tool, plus we can queue requests. And here is where the “fun” begins. By "fun," I mean queue parsing. In the “Queues” series, we coded the consumer in Python, using parallel programming techniques. Strictly speaking, we could skip that part this time, but now the tasks are far more complicated. The program needs to understand where and how it should send requests (since there are many services). In other words, we need a registry of routes and services (a rough sketch of such a consumer is given below). Besides, the services’ responses must be sent back to the clients, and since it sometimes takes time for a service to respond, it’s hard to orchestrate response delivery in synchronous mode. We need to store the services’ responses somewhere too (here comes another Tarantool instance).
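In the “Queues” series the consumer was an external Python program; purely to illustrate the idea of a route registry, here is a rough in-process variant written as a Lua fiber that could live in the same instance as q12.lua. The route table, the layout of the queued request, and the X-Auth-Token header are all assumptions for illustration, not the final design (that's for the next article):

-- A rough consumer sketch; the route registry, the structure of data.query,
-- and the auth header are illustrative placeholders
local fiber = require('fiber')
local json = require('json')
local http_client = require('http.client').new()

-- Hypothetical registry mapping URI prefixes to backend services
local routes = {
    ['/api_gateway/users']   = 'http://127.0.0.1:9001',
    ['/api_gateway/billing'] = 'http://127.0.0.1:9002',
}

local function find_service(uri)
    for prefix, service in pairs(routes) do
        if uri:startswith(prefix) then
            return service
        end
    end
end

fiber.create(function()
    while true do
        local task = q2:take()          -- blocks until a request is queued
        local id, data = task[1], task[3]
        -- data.query is whatever NGINX passed to api_gateway(); its exact
        -- structure depends on the tnt_pass_http_request settings
        local uri = (data.query and data.query.uri) or ''
        local service = find_service(uri)
        if service then
            local resp = http_client:request('POST', service .. uri,
                json.encode(data.query),
                { headers = { ['X-Auth-Token'] = data.token } })
            -- TODO: store resp.status and resp.body so the client can poll for them
            q2:ack(id)                  -- the task is done
        else
            q2:bury(id)                 -- no route found; set the task aside
        end
    end
end)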
In its simplest form, though, the asynchronous interaction problem can be shifted to the client, which will periodically poll the API Gateway and sooner or later get a response or an error message. Let’s assume for now that this is how we’ll set it up. But even taking that for granted, we would still need to consider push notifications (i.e., when our API Gateway initiates a connection to the client), load balancing (for running multiple instances of the same service, etc.), API versioning (multiple clients working with different versions), and other things before reaching a simple but efficient API Gateway suitable for use in real-life projects.