With the evolution of distributed systems, microservices-based applications have attracted the interest of nearly every organization that wants to grow over time and survive market competition. Microservices allow us to scale and manage systems easily. Development time drops because the effort is distributed across many teams, and the time-to-market for new features shrinks significantly.
Due to this distributed nature, components communicate with each other over the network. Many factors can affect that communication, whether it is security, added latency, or the abrupt termination of an ongoing exchange, and each drives up infrastructure cost. So we can either fix the network, which comes with numerous problems of its own, or we can architect our system to be resilient and reliable over time.
Services in a distributed environment communicate with each other using network protocols, so part of making a system resilient, reliable, and fast lies in choosing the right protocol. We have various protocols for different needs and for different layers of the network (i.e., the OSI model). For service-to-service or browser-to-service communication, HTTP is usually adopted as the de facto standard; all REST-based services build on it.
But is our heavy reliance on HTTP justified? HTTP was created for a different purpose: browser-to-backend communication that retrieves data using a request-response model. Today it is used even for inter-service communication, which limits our systems' real power.
Here, I'll cover the issues with HTTP and the solutions available for common use cases. Later, I'll provide some details about RSocket, which can alleviate many of these issues and make our applications fully reactive.
Blocking Communication
HTTP is an application layer protocol in the OSI network model. Over time, HTTP has evolved through several versions. Let's go through them and visualize the issues each one carries.
HTTP/1.0
When a client service wants to retrieve data from another service, it first opens a connection to the server and then sends a request over it. The server sends a response and closes the connection. Each request requires opening a new connection, so every request-response cycle carries significant overhead, and the result is slow communication.
Fig.1 shows how HTTP/1.0 works between client and server machines, including the new connection required for each request-response cycle.
Fig.1. HTTP/1.0
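To see that overhead in code, here is a minimal Java sketch (example.com stands in for a real server) that performs one HTTP/1.0 exchange over a raw TCP socket; the server closes the connection after responding, so any further request must repeat the whole TCP setup:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class Http10Client {
    public static void main(String[] args) throws Exception {
        // Each HTTP/1.0 exchange needs its own TCP connection:
        // connect, send one request, read the response, done.
        try (Socket socket = new Socket("example.com", 80);
             PrintWriter out = new PrintWriter(socket.getOutputStream());
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {

            out.print("GET / HTTP/1.0\r\n");
            out.print("Host: example.com\r\n");
            out.print("\r\n");
            out.flush();

            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
        // A second request would repeat the entire handshake again.
    }
}
```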
HTTP/1.1
HTTP/1.1 improved on HTTP/1.0 by introducing persistent connections and a feature called 'pipelining'. With these, a client can send multiple requests over a single connection, which stays alive for a configured period of time.
Though the situation improved, a problem remained, popularly known as 'head-of-line blocking'. If multiple requests are pipelined on a single connection, the server queues them and must respond in the same order. So if the client generates requests faster than the server can respond, an earlier request blocks the processing of all the requests behind it. The result is congestion in the network and unnecessary delay.
Fig.2 shows HTTP/1.1 at work, where Request#2 faces head-of-line blocking behind Request#1. Until the server processes Request#1 and replies with Response#1, Request#2 waits for processing at the server end.
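A short sketch of the improvement, using Java 11's built-in HttpClient pinned to HTTP/1.1 against a placeholder URL: both requests to the same host reuse one persistent (keep-alive) connection instead of opening a new socket each time. Note that the JDK client reuses connections but does not pipeline, so this demonstrates persistence only:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Http11Client {
    public static void main(String[] args) throws Exception {
        // One client, one connection pool: consecutive requests to the
        // same host reuse the persistent connection, unlike HTTP/1.0.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_1_1)
                .build();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/"))
                .GET()
                .build();

        for (int i = 0; i < 2; i++) {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Status: " + response.statusCode());
        }
    }
}
```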
HTTP/2
HTTP/2 improves on HTTP/1.1 with a new feature: 'multiplexing'. A client can send multiple requests as separate streams over a single connection, and the server sends its responses back over the same streams. This makes inter-service communication relatively faster.
As shown in Fig.3, HTTP/2 multiplexes streams over a single connection. Processed responses are sent over the same channel and can be interleaved with other response frames, so a delayed or blocked response does not hold up the others.
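Here is a minimal sketch, again against placeholder URLs, that asks Java's HttpClient to negotiate HTTP/2 and fires three requests concurrently; when the server supports HTTP/2, they travel as independent streams over one connection, so a slow response does not delay the others:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class Http2Client {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)  // negotiate HTTP/2 if available
                .build();

        // Three requests issued at once: over HTTP/2 they share a single
        // connection as independent streams, so there is no head-of-line
        // blocking at the HTTP layer.
        List<CompletableFuture<HttpResponse<String>>> futures = List.of(
                send(client, "https://example.com/a"),
                send(client, "https://example.com/b"),
                send(client, "https://example.com/c"));

        futures.forEach(f -> System.out.println(
                "Status: " + f.join().statusCode()));
    }

    private static CompletableFuture<HttpResponse<String>> send(
            HttpClient client, String url) {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url)).GET().build();
        return client.sendAsync(request, HttpResponse.BodyHandlers.ofString());
    }
}
```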
Where Can RSocket Be Used?
RSocket's advantages are not limited to a specific industry. Its better streaming features widen its scope to real-time chat applications, GPS-based applications, online education with multimedia chats, and collaborative drawing. It is a natural fit for:
- Any cloud-based application where a lot of data is exchanged via inter-service communication.
- Distributed systems that want to reduce latency and become faster.
- Distributed systems that want to reduce operational cost through better CPU utilization and higher memory efficiency.
- Applications where the server wants to query a specific set of clients to debug an issue at run-time; RSocket's bi-directional behavior lets the server send requests to a client over the existing connection.
- Non-critical tracing, using the fire-and-forget interaction model (a minimal sketch follows this list).
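To make the last two items concrete, here is a minimal fire-and-forget sketch assuming the rsocket-java library (artifacts io.rsocket:rsocket-core and io.rsocket:rsocket-transport-netty); the host, port, and trace message are placeholders for this example. The client pushes a trace event over TCP and does not wait for any response:

```java
import io.rsocket.RSocket;
import io.rsocket.SocketAcceptor;
import io.rsocket.core.RSocketConnector;
import io.rsocket.core.RSocketServer;
import io.rsocket.transport.netty.client.TcpClientTransport;
import io.rsocket.transport.netty.server.TcpServerTransport;
import io.rsocket.util.DefaultPayload;
import reactor.core.publisher.Mono;

public class FireAndForgetTrace {
    public static void main(String[] args) {
        // Server side: accept fire-and-forget frames and log them.
        // There is no response payload to send back.
        RSocketServer.create(SocketAcceptor.forFireAndForget(payload -> {
                    System.out.println("trace: " + payload.getDataUtf8());
                    payload.release();
                    return Mono.empty();
                }))
                .bind(TcpServerTransport.create("localhost", 7000))
                .block();

        // Client side: push a non-critical trace event and move on
        // without waiting for an acknowledgement.
        RSocket rSocket = RSocketConnector
                .connectWith(TcpClientTransport.create("localhost", 7000))
                .block();

        rSocket.fireAndForget(DefaultPayload.create("user-42 viewed cart"))
                .block(); // block only so this demo doesn't exit early
    }
}
```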
Fig.4 shows a possible usage of the RSocket protocol, where multiple microservices implemented in various languages communicate with each other over the transport layer of their choice. RSocket provides application-layer semantics, which makes the interaction easier: microservices are no longer tightly coupled to a transport protocol's semantics and can use a simple, consistent RSocket interface.
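To illustrate that transport independence, here is a sketch, again assuming rsocket-java, where the application-level RSocket API stays identical while only the underlying transport changes (the TCP and WebSocket endpoints are placeholders):

```java
import io.rsocket.RSocket;
import io.rsocket.core.RSocketConnector;
import io.rsocket.transport.netty.client.TcpClientTransport;
import io.rsocket.transport.netty.client.WebsocketClientTransport;
import io.rsocket.util.DefaultPayload;
import java.net.URI;

public class TransportIndependence {
    public static void main(String[] args) {
        // Same RSocket API, two different transports underneath.
        RSocket overTcp = RSocketConnector
                .connectWith(TcpClientTransport.create("localhost", 7000))
                .block();

        RSocket overWebSocket = RSocketConnector
                .connectWith(WebsocketClientTransport.create(
                        URI.create("ws://localhost:8080/rsocket")))
                .block();

        // Either connection supports the identical interaction models.
        System.out.println(overTcp
                .requestResponse(DefaultPayload.create("ping"))
                .block()
                .getDataUtf8());
        overWebSocket.dispose();
    }
}
```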