
The Anatomy of Connection Pooling


Introduction

All projects I’ve been working on have used database connection pooling, and for very good reasons. Sometimes we forget why we are employing a particular design pattern or technology, so it’s worth stepping back and reasoning about it. Every technology or technological decision has both upsides and downsides, and if you can’t see any drawback, you need to wonder what you are missing.

The database connection life-cycle

Every database read or write operation requires a connection. So let’s see what the database connection flow looks like:

[Diagram: the database connection life-cycle]

The flow goes like this (a minimal JDBC sketch follows the steps):

  1. The application data layer asks the DataSource for a database connection
  2. The DataSource will use the database Driver to open a database connection
  3. A database connection is created and a TCP socket is opened
  4. The application reads/writes to the database
  5. The connection is no longer required so it is closed
  6. The socket is closed
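
To make these steps concrete, here is a minimal sketch of that life-cycle using plain JDBC. The connection URL and credentials are placeholders, and the example assumes a PostgreSQL driver on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ConnectionLifeCycleExample {

    public static void main(String[] args) throws SQLException {
        // Steps 1-3: a physical connection is created and a TCP socket is opened
        // (for PostgreSQL, a dedicated backend OS process is also spawned)
        try (Connection connection = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test", "postgres", "postgres")) {
            // Step 4: the application reads/writes to the database
            try (PreparedStatement statement = connection.prepareStatement("SELECT 1");
                 ResultSet resultSet = statement.executeQuery()) {
                while (resultSet.next()) {
                    resultSet.getInt(1);
                }
            }
        }
        // Steps 5-6: leaving the try-with-resources block closes the connection and its socket
    }
}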

You can easily deduce that opening/closing connections is quite an expensive operation. PostgreSQL uses a separate OS process for every client connection, so a high rate of opening/closing connections is going to put a strain on your database management system.

The most obvious reasons for reusing a database connection would be:

  • reducing the application and database management system OS I/O overhead for creating/destroying a TCP connection
  • reducing JVM object garbage

Pooling vs No Pooling

Let’s see how a no-pooling solution compares to HikariCP, which is probably the fastest connection pooling framework available.

The test will open and close 1000 connections.

// Base test class; concrete subclasses supply either a pooling or a no-pooling DataSource
private static final Logger LOGGER = LoggerFactory.getLogger(DataSourceConnectionTest.class);

private static final int MAX_ITERATIONS = 1000;

private Slf4jReporter logReporter;

private Timer timer;

protected abstract DataSource getDataSource();

@Before
public void init() {
    MetricRegistry metricRegistry = new MetricRegistry();
    this.logReporter = Slf4jReporter
            .forRegistry(metricRegistry)
            .outputTo(LOGGER)
            .build();
    timer = metricRegistry.timer("connection");
}

@Test
public void testOpenCloseConnections() throws SQLException {
    for (int i = 0; i < MAX_ITERATIONS; i++) {
        Timer.Context context = timer.time();
        // obtain a connection and immediately release it, measuring the acquire/close cost
        getDataSource().getConnection().close();
        context.stop();
    }
    logReporter.report();
}
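
The getDataSource() method is abstract, so each test variant supplies its own DataSource. The following is only a hypothetical sketch of the two implementations (the actual test setup is in the GitHub repository and may differ); the PostgreSQL connection settings are placeholders:

import javax.sql.DataSource;
import org.postgresql.ds.PGSimpleDataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// No-pooling variant: every getConnection() call opens a brand new physical connection
class NoPoolingDataSourceConnectionTest extends DataSourceConnectionTest {

    private final PGSimpleDataSource dataSource = new PGSimpleDataSource();

    {
        dataSource.setServerName("localhost");
        dataSource.setDatabaseName("test");
        dataSource.setUser("postgres");
        dataSource.setPassword("postgres");
    }

    @Override
    protected DataSource getDataSource() {
        return dataSource;
    }
}

// Pooling variant: getConnection() borrows an already opened connection from the HikariCP pool
class HikariDataSourceConnectionTest extends DataSourceConnectionTest {

    private final HikariDataSource dataSource;

    HikariDataSourceConnectionTest() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/test");
        config.setUsername("postgres");
        config.setPassword("postgres");
        config.setMaximumPoolSize(1);
        // the pool is created once, so the test loop keeps reusing the same physical connection
        dataSource = new HikariDataSource(config);
    }

    @Override
    protected DataSource getDataSource() {
        return dataSource;
    }
}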

The chart displays the time spent opening and closing connections, so lower is better.

[Chart: No Pooling vs Connection Pooling connection times]

Connection pooling is 600 times faster than the no-pooling alternative. Our enterprise system consists of tens of applications, and just one batch processor system could issue more than 2 million database connections per hour, so a two-orders-of-magnitude optimization is worth considering.

Metric    No Pooling Time (ms)    Connection Pooling Time (ms)
min       74.551414               0.002633
max       146.69324               125.528047
mean      78.216549               0.128900
stddev    5.9438335               3.969438
median    76.150440               0.003218

Why is pooling so much faster?

To understand why the pooling solution performed so well, we need to analyse the pooling connection management flow:

[Diagram: the pooling connection life-cycle]

Whenever a connection is requested, the pooling data source tries to acquire one from the pool of available connections. The pool creates new connections only when there are no available ones left and it hasn’t yet reached its maximum size. Calling close() on a pooled connection returns it to the pool instead of actually closing it.

[Diagram: connection acquire request states]
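
These acquire states can be summarized in code. The following is only a simplified sketch of the acquire/release logic of a generic pool, not the actual implementation of HikariCP or any other framework; all names are illustrative:

import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

abstract class SimplisticPool {

    private final BlockingQueue<Connection> available = new LinkedBlockingQueue<>();
    private final AtomicInteger created = new AtomicInteger();
    private final int maxSize;
    private final long acquireTimeoutMillis;

    SimplisticPool(int maxSize, long acquireTimeoutMillis) {
        this.maxSize = maxSize;
        this.acquireTimeoutMillis = acquireTimeoutMillis;
    }

    // Subclasses open a real (physical) connection, e.g. via DriverManager
    protected abstract Connection openPhysicalConnection() throws SQLException;

    public Connection acquire() throws SQLException, InterruptedException {
        // 1. Reuse an idle connection if one is available
        Connection connection = available.poll();
        if (connection == null && created.incrementAndGet() <= maxSize) {
            // 2. Grow the pool, but only while it is below its maximum size
            connection = openPhysicalConnection();
        } else if (connection == null) {
            created.decrementAndGet();
            // 3. Pool exhausted: wait for a returned connection, up to the acquire timeout
            connection = available.poll(acquireTimeoutMillis, TimeUnit.MILLISECONDS);
            if (connection == null) {
                throw new SQLException("Connection acquire request timed out");
            }
        }
        return connection;
    }

    // "Closing" a pooled connection just returns it to the pool
    public void release(Connection connection) {
        available.offer(connection);
    }
}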

Faster and safer

The connection pool acts as a bounded buffer for incoming connection requests. If there is a traffic spike, the connection pool will level it out instead of saturating all the available database resources.

The waiting step and the timeout mechanism are safety hooks that prevent excessive database server load. If one application generates way too much database traffic, the connection pool mitigates it, preventing it from taking down the database server (and therefore affecting the whole enterprise system).

With great power comes great responsibility

All these benefits come at a price, materialized in the extra complexity of the pool configuration (especially in large enterprise systems). So this is no silver bullet, and you need to pay attention to many pool settings, such as the ones below (a configuration sketch follows the list):

  • minimum size
  • maximum size
  • max idle time
  • acquire timeout
  • timeout retry attempts
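
As a rough illustration, here is how some of these settings map onto HikariCP configuration properties. The values are placeholders, and “timeout retry attempts” has no direct HikariCP equivalent (it is a feature of pools such as c3p0, via acquireRetryAttempts):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSettingsExample {

    public static HikariDataSource newPooledDataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/test");
        config.setUsername("postgres");
        config.setPassword("postgres");
        config.setMinimumIdle(5);            // minimum size: idle connections kept ready to serve requests
        config.setMaximumPoolSize(20);       // maximum size: hard cap on concurrently open connections
        config.setIdleTimeout(60_000);       // max idle time (ms) before an unused connection is retired
        config.setConnectionTimeout(5_000);  // acquire timeout (ms): how long a caller waits for a connection
        return new HikariDataSource(config);
    }
}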

My next article will dig into enterprise connection pooling challenges and how FlexyPool can assist you in finding the right pool sizes.

Code available on GitHub.

If you have enjoyed reading my article and you’re looking forward to getting instant email notifications of my latest posts, you just need to follow my blog.

Published at DZone with permission of Vlad Mihalcea, DZone MVB. See the original article here.