
The Simple Scalability Equation



Queueing Theory

Queueing theory allows us to predict queue lengths and waiting times, which is of paramount importance for capacity planning. For an architect, this is a very handy tool, since queues are not just the preserve of messaging systems.

To avoid overloading the system, we use throttling. Whenever the number of incoming requests surpasses the available resources, we basically have two options:

  • discarding all overflowing traffic, therefore decreasing availability
  • queuing requests and waiting (up to a timeout threshold) for busy resources to become available

This behaviour applies to thread-per-request web servers, batch processors, and connection pools alike.
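As a minimal sketch, a bounded thread pool can express both options: the bounded queue buffers requests (option 2) and, once it fills up, the rejection policy discards the overflow (option 1). The processRequest handler is hypothetical:

import java.util.concurrent.*;

public class ThrottlingSketch {

    public static void main(String[] args) {
        // 5 workers and a bounded queue holding at most 100 waiting requests
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
            5, 5, 0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<>(100),
            new ThreadPoolExecutor.AbortPolicy()); // option 1: discard overflowing traffic

        try {
            executor.execute(ThrottlingSketch::processRequest);
        } catch (RejectedExecutionException e) {
            // queue full: the request is discarded, decreasing availability
        }

        executor.shutdown();
    }

    // hypothetical request handler
    private static void processRequest() {
    }
}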

What’s in it for us?

Agner Krarup Erlang is the father of queueing theory and traffic engineering, being the first to postulate the mathematical models required to provision telecommunication networks.

Erlang formulas are modelled for M/M/k queues, meaning the system is characterized by:

  • arrivals following a Poisson process (exponentially distributed inter-arrival times)
  • exponentially distributed service times
  • k servers processing requests concurrently

The Erlang formulas give us the servicing probability for:

  • a queueless system, where overflowing requests are rejected (Erlang B)
  • a queueing system, where overflowing requests wait for a busy server to become available (Erlang C)
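For reference, the Erlang B formula gives the blocking probability for a queueless system with k servers and an offered load of A = λ/μ (μ being the service rate of one server):

B(A, k) = \frac{A^k / k!}{\sum_{i=0}^{k} A^i / i!}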

This is not strictly applicable to thread pools, as requests are not fairly serviced and service times do not always follow an exponential distribution.

A general-purpose formula, applicable to any stable system (a system where the arrival rate does not exceed the departure rate), is Little’s Law:

L = \lambda W


L – the average number of customers in the system
λ – the long-term average arrival rate
W – the average time a request spends in the system


You can apply it almost everywhere, from shopper queues to web request traffic analysis.

This can be regarded as a simple scalability formula: to accommodate twice the incoming traffic, we have two options (as the worked equation below shows):

  1. halve the response time (therefore increasing performance)
  2. double the number of available servers (therefore adding more capacity)
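Plugging both options into Little’s Law confirms that each one doubles the sustainable arrival rate:

\frac{L}{W/2} = \frac{2L}{W} = 2\frac{L}{W} = 2\lambda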

A real-life example

A simple example is a supermarket waiting line. When you join the line, you should pay attention to the arrival rate (e.g. λ = 2 persons / minute) and the queue length (e.g. L = 6 persons) to find out how much time you are going to spend waiting to be served (e.g. W = L / λ = 3 minutes).

A provisioning example

Let’s say we want to configure a connection pool to support a given traffic demand.
The connection pool system is characterized by the following variables:

Ws = service time (the connection hold time) = 100 ms = 0.1 s
Ls = in-service requests (the pool size) = 5

Assuming there is no queueing (Wq = 0):

\lambda = \frac{L_s}{W_s} = \frac{5}{0.1} = 50 \frac{requests}{s}

Our connection pool can deliver up to 50 requests per second without ever queueing any incoming connection request.

Whenever there are traffic spikes, we need to rely on a queue, and since we impose a fixed connection acquire timeout, the queue length will be limited.


Since the system is considered stable, the arrival rate applies both to the queue entry and to the actual servicing:

\lambda = \frac{L_s}{W_s} = \frac{5}{0.1} = \frac{L_q}{W_q} = \frac{100}{2}

This queueing configuration still delivers 50 requests per second, but it may also queue up to 100 requests for up to 2 seconds.
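Such a configuration maps directly onto connection pool settings. A minimal sketch, assuming HikariCP as the pool implementation (the JDBC URL is hypothetical):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolProvisioning {

    public static void main(String[] args) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost/test"); // hypothetical database
        config.setMaximumPoolSize(5);       // Ls = 5 in-service requests
        config.setConnectionTimeout(2000);  // Wq <= 2 s acquire timeout, so Lq <= λ · Wq = 100
        try (HikariDataSource dataSource = new HikariDataSource(config)) {
            // acquire and release connections at up to 50 requests/s
        }
    }
}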

A one-second traffic burst of 150 requests would be handled, since:

  • 50 requests can be served in the first second
  • the other 100 are going to be queued and served in the next two seconds

The timeout equations are:

L_{spike} = \lambda_{spike} T_{spike}
T = \frac{L_{spike}}{\lambda} = \frac{\lambda_{spike} T_{spike}}{\lambda}
L_q = L_{spike} - \lambda \cdot 1s
T_q = T - 1s

where λ · 1s is the number of requests that can be served during the first second; the remaining L_q requests form the queue buffer, draining in T_q seconds.

So, for a 3-second spike of 250 requests per second:

λspike = 250 requests/s
Tspike = 3 s

The number of requests to be served is:

L_{spike} = 250 \frac{requests}{s} \times 3s = 750\ requests
T = \frac{750\ requests}{50 \frac{requests}{s}} = 15s
L_q = L_{spike} - \lambda \cdot 1s = 750 - 50 = 700\ requests
T_q = T - 1s = 14s

This spike would require 15 seconds to be fully processed, meaning a 700-request queue buffer that takes another 14 seconds to drain.
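These equations are easy to script when provisioning. A minimal sketch (the provision helper is illustrative, not a library API):

public class SpikeProvisioning {

    // applies the timeout equations above
    static void provision(double lambda, double lambdaSpike, double tSpike) {
        double lSpike = lambdaSpike * tSpike; // total requests arriving during the spike
        double t = lSpike / lambda;           // time to fully process the spike
        double lQ = lSpike - lambda;          // requests still queued after the first second
        double tQ = t - 1;                    // time to drain the queue buffer
        System.out.printf("Lspike=%.0f requests, T=%.0f s, Lq=%.0f requests, Tq=%.0f s%n",
            lSpike, t, lQ, tQ);
    }

    public static void main(String[] args) {
        // λ = 50 requests/s, facing a 3 s spike of 250 requests/s
        provision(50, 250, 3);
        // prints: Lspike=750 requests, T=15 s, Lq=700 requests, Tq=14 s
    }
}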


Little’s Law operates with long-term averages, so it might not suit every traffic burst pattern. That’s why metrics are very important when doing resource provisioning.

The queue is valuable because it buys us more time. It doesn’t affect the throughput; the throughput is only sensitive to performance improvements or to adding more servers.

But if the throughput is constant, then queueing is going to level traffic bursts at the cost of delaying the processing of the overflowing requests.

FlexyPool allows you to analyse all this traffic data, giving you the best insight into your connection pool’s inner workings. Its fail-over strategies are safety mechanisms for when the initial configuration assumptions no longer hold.

If you have enjoyed reading my article and you’re looking forward to getting instant email notifications of my latest posts, you just need to follow my blog.



Published at DZone with permission of Vlad Mihalcea, DZone MVB. See the original article here.
