How fast are Java sockets
How long a request/response takes, and the rate at which requests can be performed, in a Java application depends on a number of factors: the network, the network adapter, the Java Socket and TCP layers, and what your application does.
Usually the last factor is your limitation. But what if you want to measure the overhead that Java and TCP contribute? Here is a way you can test this.
Latency and Throughput

The latency, in this test, is the round trip time (sometimes written as RTT). This is the time between sending a request and receiving the response, and it includes the delay on the client side, the transport, and the delay on the server side.
The throughput is a measure of how many request/responses can be performed in a given amount of time. How long each individual request/response takes is not measured, and has no impact unless it is really large.
The Results

These results are for a fast PC doing nothing but passing data back and forth over loopback. This is one of the limiting factors of using Java and TCP on your system: a server running a real application on a real network will not be faster than this.
You should test your own system, as the hardware used can make a big difference. (See The Code below)
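As a minimal sketch of this kind of loopback round-trip test (not the article's actual benchmark code; message size, run count, and class names here are illustrative), one can start an echo server thread and time small request/response exchanges over a single connection:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Arrays;

// Minimal loopback round-trip test: an echo server thread and a client that
// times small, fixed-size request/response messages over one connection.
public class SocketRtt {
    static final int MSG = 64; // message size in bytes (illustrative)

    // Returns the sorted round-trip times, in nanoseconds, for `runs` echoes.
    public static long[] measure(int runs) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // any free port
            Thread echo = new Thread(() -> {
                try (Socket s = server.accept()) {
                    s.setTcpNoDelay(true);
                    DataInputStream in = new DataInputStream(s.getInputStream());
                    DataOutputStream out = new DataOutputStream(s.getOutputStream());
                    byte[] buf = new byte[MSG];
                    while (true) {       // echo every message straight back
                        in.readFully(buf);
                        out.write(buf);
                        out.flush();
                    }
                } catch (Exception closed) { /* client disconnected */ }
            });
            echo.setDaemon(true);
            echo.start();

            try (Socket s = new Socket("localhost", server.getLocalPort())) {
                s.setTcpNoDelay(true); // stop Nagle delaying small packets
                DataOutputStream out = new DataOutputStream(s.getOutputStream());
                DataInputStream in = new DataInputStream(s.getInputStream());
                byte[] buf = new byte[MSG];
                long[] times = new long[runs];
                for (int i = 0; i < runs; i++) {
                    long start = System.nanoTime();
                    out.write(buf);
                    out.flush();
                    in.readFully(buf);
                    times[i] = System.nanoTime() - start;
                }
                Arrays.sort(times);
                return times;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        long[] t = measure(20_000);
        System.out.printf("RTT 1/50/99%%tile: %.1f / %.1f / %.1f us%n",
                t[t.length / 100] / 1e3, t[t.length / 2] / 1e3,
                t[t.length - t.length / 100] / 1e3);
    }
}
```

Disabling Nagle's algorithm with setTcpNoDelay(true) matters here, as small writes would otherwise be buffered waiting for more data, inflating the measured RTT.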
Socket latency 1/50/99%tile: 5.6 / 5.8 / 7.0 us
Socket throughput: 170 K/s
Threaded socket latency 1/50/99%tile: 6.0 / 8.5 / 10.7 us
Threaded socket throughput: 234 K/s
The first pair of results tests just the Socket. It is single-threaded and, as you would expect, the throughput is the inverse of the latency: 170 K/s * 5.8e-6 s = 0.986 (about one thread busy). In the threaded test, both the latency and the throughput are higher: 234 K/s * 8.5e-6 s = 1.989 (about two threads busy). Put another way, the throughput was double the inverse of the latency.
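The arithmetic above (throughput times median latency approximates the number of busy threads, by Little's law) can be checked directly:

```java
// Sanity-check: throughput (per second) * median latency (seconds)
// approximates the number of concurrently busy threads.
public class BusyThreads {
    static double busyThreads(double throughputPerSec, double latencySec) {
        return throughputPerSec * latencySec;
    }

    public static void main(String[] args) {
        System.out.printf("single-threaded: %.3f%n", busyThreads(170_000, 5.8e-6)); // ~0.986
        System.out.printf("threaded:        %.3f%n", busyThreads(234_000, 8.5e-6)); // ~1.989
    }
}
```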
Can Throughput Be Increased?

Throughput can be increased further by batching and by using multiple connections. This will increase (i.e. worsen) latency but can give a significant gain in throughput: between 2x and 10x can be expected with one server. Additional servers have the potential to increase throughput to the limits of your budget. ;)
However, once you have a real application on a real network, you will be lucky to achieve these throughput numbers on one server, even with batching and multiple connections.
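A sketch of the batching idea (again illustrative, not the article's test code): instead of flushing and waiting after every request, the client queues a whole batch of requests, flushes once, and then drains the batch of responses, amortizing the per-round-trip cost across the batch.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of batching over one connection: write a batch of fixed-size
// requests with a single flush, then read the batch of echoed responses.
public class BatchedEcho {
    static final int MSG = 64; // message size in bytes (illustrative)

    // Round-trips batchSize * batches messages; returns elapsed nanoseconds.
    public static long run(int batchSize, int batches) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread echo = new Thread(() -> {
                try (Socket s = server.accept()) {
                    DataInputStream in = new DataInputStream(s.getInputStream());
                    DataOutputStream out = new DataOutputStream(s.getOutputStream());
                    byte[] buf = new byte[MSG];
                    while (true) {        // echo each message straight back
                        in.readFully(buf);
                        out.write(buf);
                        out.flush();
                    }
                } catch (Exception closed) { /* client done */ }
            });
            echo.setDaemon(true);
            echo.start();

            try (Socket s = new Socket("localhost", server.getLocalPort())) {
                s.setTcpNoDelay(true);
                DataOutputStream out = new DataOutputStream(s.getOutputStream());
                DataInputStream in = new DataInputStream(s.getInputStream());
                byte[] buf = new byte[MSG];
                long start = System.nanoTime();
                for (int b = 0; b < batches; b++) {
                    for (int i = 0; i < batchSize; i++)
                        out.write(buf);    // queue the whole batch
                    out.flush();           // one flush per batch, not per request
                    for (int i = 0; i < batchSize; i++)
                        in.readFully(buf); // drain the batch of responses
                }
                return System.nanoTime() - start;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        long t1 = run(1, 2_000);   // no batching: one request per round trip
        long t100 = run(100, 20);  // the same 2,000 messages, batched 100 at a time
        System.out.printf("unbatched %.1f ms, batched %.1f ms%n", t1 / 1e6, t100 / 1e6);
    }
}
```

Note the trade-off mentioned above: the first request in a batch now waits for the whole batch to be sent and echoed, so its individual latency is worse even though overall throughput improves. The batch must also stay small enough to fit in the socket send/receive buffers, or the client and server can deadlock.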