
Determine Payload Size Using Nginx


We wanted to evaluate which protocol is suitable for our microservice architecture, as our payload size was increasing considerably.


Nowadays almost every application is API-based or distributed: a request is served not by a single application but by a series of applications running in parallel or in sequence. Applications talk to each other over various protocols such as REST, RPC, and WebSocket, and the payload formats vary across JSON, XML, binary, and proprietary formats.

Recently, we wanted to evaluate which protocol is suitable for our microservice architecture, as our payload size was increasing considerably. To decide, we needed to know the maximum and minimum payload sizes, since our payload size is directly proportional to the number of items selected.

Our tech stack is open source, and our deployment is a containerized, polyglot microservice architecture on Docker Swarm. Some of the services use REST and some use gRPC (Protocol Buffers), so the payloads are JSON, gzipped JSON, and binary. To measure payload size, we therefore need a common proxy that can serve both HTTP and gRPC requests. After looking at the available tools, we decided to front our swarm services with Nginx so that we can capture payload sizes and log each service's requests separately.

(Figure: overlay network)
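The exact deployment is outside the scope of this article, but as a rough sketch (the network name, image names, and config path below are placeholders, not our actual values), the backend services and the Nginx proxy simply need to join the same swarm overlay network so that the service names referenced in nginx.conf resolve through swarm DNS:

Plain Text

# Hypothetical names for illustration only.
docker network create --driver overlay appnet

# The backends and the Nginx proxy share the overlay network, so the names
# swarm_service1 / swarm_service2 used in nginx.conf resolve via swarm DNS.
docker service create --name swarm_service1 --network appnet service1-image
docker service create --name swarm_service2 --network appnet service2-image
docker service create --name payload-proxy --network appnet \
  --mount type=bind,source=/opt/nginx/nginx.conf,target=/etc/nginx/nginx.conf,readonly \
  nginx

In a multi-node swarm you would typically distribute nginx.conf with docker config create rather than a bind mount, since a bind-mounted file must exist on every node the proxy task can land on.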

Let's look at how to configure Nginx for this. The first challenge is serving all types of payload, so we use the stream module rather than the http module. For each service, we dedicate an incoming port and an upstream port. There is a separate log file for each service because Nginx can only log the upstream IP address; if we combined the logs of all services, it would be hard to tell them apart by IP, as these are container IPs that can change whenever a service is restarted or scaled.

Nginx configuration:

Plain Text

worker_processes  1;

events {
    worker_connections  1024;
}

stream {
    # One upstream block per swarm service; the service names resolve on the overlay network.
    upstream stream_backend_service1 {
        server swarm_service1:8080;
    }
    upstream stream_backend_service2 {
        server swarm_service2:9090;
    }

    # Log the upstream address and the bytes sent to / received from it,
    # i.e. the request and response payload sizes.
    log_format combined '$upstream_addr = Request Payload Size - $upstream_bytes_sent  Response Payload Size - $upstream_bytes_received';

    # One server block (and one dedicated listen port) per service, each with its own log file.
    server {
        listen            127.0.0.1:8090;
        proxy_pass        stream_backend_service1;
        access_log        logs/payloadsize.log  combined;
    }
    server {
        listen            127.0.0.1:8091;
        proxy_pass        stream_backend_service2;
        access_log        logs/payloadsize_server2.log  combined;
    }
}



If you open the respective log files, you will see entries in the format defined by the log_format directive above.
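The exact values depend on your traffic; the lines below only sketch the shape of these entries, with made-up upstream addresses and byte counts.

Plain Text

10.0.1.5:8080 = Request Payload Size - 1320  Response Payload Size - 58430
10.0.1.5:8080 = Request Payload Size - 978  Response Payload Size - 20544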
Our swarm_service1 uses REST over HTTP and swarm_service2 uses gRPC; because the stream module proxies at the TCP level, both are handled and logged the same way.
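Since our original goal was to find the minimum and maximum payload sizes, a quick way to pull them out of such a log is a small shell pipeline like the one below (a sketch assuming the log format above, where the response size is the last field and the request size is field 7; adjust the path to wherever Nginx writes its logs):

Plain Text

# Print the smallest and largest response payload sizes seen so far.
awk '{print $NF}' logs/payloadsize.log | sort -n | sed -n '1p;$p'

# Same idea for request payload sizes (field 7 in this log format).
awk '{print $7}' logs/payloadsize.log | sort -n | sed -n '1p;$p'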
I hope this helps others who want to explore the benefits of Nginx.

Happy coding.

Topics:
devops, nginx, payload

Published at DZone with permission of Milind Deobhankar. See the original article here.

Opinions expressed by DZone contributors are their own.
