The ABCs of JDBC, Part 5 - Transactions


In this week's installment of our JDBC FAQ series, we take a look at how transactions are managed in JDBC, including topics like optimistic and pessimistic concurrency, database locking, Savepoints, and the standard isolation levels defined by JDBC.

Read the other parts in DZone's JDBC FAQ series:


What anomalies can occur without proper transaction and locking?

When we read and modify data, we may face dilemmas regarding data integrity and validity. These dilemmas arise because database operations collide with each other; for example, two write operations colliding, or a read operation colliding with a write.

These anomalies are listed below:

  1. Dirty Reads: A dirty read happens when a transaction reads data that is being changed by another transaction that has not committed yet.
  2. Non-Repeatable Reads: A non-repeatable read happens when a transaction reads records that another transaction is modifying. If the reading transaction runs the same query again within the same transaction, the result will differ from the first time.
  3. Phantom Reads: A phantom read occurs when a transaction reads a set of records matching a specific WHERE clause while another transaction inserts new records that also match that WHERE clause. The first transaction's result set does not include the newly inserted records.


What is database locking?

When we say we have a database lock over a record, a set of records, a database page, a table, a tablespace, etc., we mean that the database prevents any changes to the locked values. When a set of records is locked, any transaction trying to change that data is queued until the lock is released.

The number of acquired locks and the proximity of the data being locked determine whether a lock should escalate to an upper level, for example from record level to page level, or shrink from a table-level lock to a page-level lock. When a lock escalates, it prevents changes to a larger number of records; for example, a row-level lock covering a few hundred records may escalate to a table-level lock that can cover millions of records.

A database manages these locks in the most resource-efficient way it can. For example, when we hold 100 locks on a table, the database may escalate the lock and lock the entire table for new transactions instead of keeping hundreds of separate locks.


What is optimistic concurrency?

As its name implies, optimistic concurrency assumes that concurrent transactions will not affect each other's data, and therefore we do not lock the records at the database level. You may ask: what happens if, before we commit our update, another user updates the data we were working on? What will happen to the changes that other user made?

The answer lies in the mechanisms used to detect changes in the original data we read. Comparing the records field by field, or using a version field as JPA does, are two common methods to detect changes in the original data and let the user decide whether to overwrite the other user's changes or reconsider their own updates.

In optimistic locking, it is the developer's duty to check for changes in the data before updating it.
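A minimal sketch of that check, using the version-field approach: the table, column, and method names here are assumptions for illustration, not part of the original article. The UPDATE only succeeds if the version is still the one we read; zero affected rows means another transaction got there first.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OptimisticUpdate {
    // The update bumps the version and is guarded by the version we read.
    static final String SQL =
        "UPDATE student SET name = ?, version = version + 1 "
      + "WHERE id = ? AND version = ?";

    /** Returns true if our update won; false if another transaction
     *  changed the row since we read it (the version no longer matches). */
    public static boolean updateName(Connection con, long id,
                                     String newName, int versionWeRead)
            throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(SQL)) {
            ps.setString(1, newName);
            ps.setLong(2, id);
            ps.setInt(3, versionWeRead);
            return ps.executeUpdate() == 1; // 0 rows => conflict detected
        }
    }
}
```

On a `false` return, the application can re-read the row and let the user decide whether to retry or discard their changes.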


What is pessimistic concurrency?

In contrast with optimistic locking, pessimistic locking assumes that colliding transactions will certainly occur, so locking the records is necessary to prevent any other transaction from accessing the data until the current transaction finishes.
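One common way to take a pessimistic lock from JDBC is SELECT ... FOR UPDATE, which most (but not all) databases support; check your DBMS documentation. The table and column names below are assumptions for illustration. The selected row stays locked until the transaction commits or rolls back.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PessimisticRead {
    static final String SQL = "SELECT name FROM student WHERE id = ? FOR UPDATE";

    /** Reads the row and holds a write lock on it until the caller
     *  commits or rolls back the transaction. */
    public static String readAndLock(Connection con, long id) throws SQLException {
        con.setAutoCommit(false); // the lock lives for the transaction's duration
        try (PreparedStatement ps = con.prepareStatement(SQL)) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                // Other transactions that try to write this row now block
                // until we call con.commit() or con.rollback().
                return rs.next() ? rs.getString("name") : null;
            }
        }
    }
}
```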


Why should I consider optimistic versus pessimistic approaches to database concurrency?

The following facts can affect our decision between optimistic and pessimistic locking:

When we know that the lock period will be short, pessimistic locking is a good fit, while if the lock time may be long, we should consider optimistic locking to improve overall performance.

When many transactions are likely to collide, pessimistic locking is better, as the cost of rolling back failed optimistic updates can exceed the cost of locking.


How are transactions managed in JDBC?

When using JDBC, we have two options: we can either let the JDBC driver implementation handle the transaction automatically by committing the transaction right after execution of each statement, or we can handle the transaction manually by starting the transaction and then committing it when we see fit. Note that autocommit is on by default according to the JDBC specification.

The following sample code shows how to manually start and commit a transaction:

con.setAutoCommit(false); // take manual control of the transaction
Statement stmt = con.createStatement();
stmt.executeUpdate("insert into student(field1) values (110)");
// doing some other operations, like performing other db operations,
// and finally committing the transaction.
con.commit(); // committing the transaction and persisting the insert operation


For automatic transaction management we do not need to do anything, because after each operation executes, the transaction commits automatically.

Note that when batch operations are executed within a single manually managed transaction, all of the batch members form one unit of work: either all of them affect the database or none of them do.
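A sketch of that pattern, assuming the same hypothetical student table as above: the batch runs inside one manually managed transaction, so either every insert commits or, on failure, the whole batch is rolled back.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class BatchInOneTransaction {
    static final String[] INSERTS = {
        "insert into student(field1) values (110)",
        "insert into student(field1) values (120)"
    };

    public static void run(Connection con) throws SQLException {
        con.setAutoCommit(false);      // group the batch into one transaction
        try (Statement stmt = con.createStatement()) {
            for (String sql : INSERTS) {
                stmt.addBatch(sql);
            }
            stmt.executeBatch();       // may throw BatchUpdateException
            con.commit();              // all inserts become visible together
        } catch (SQLException e) {
            con.rollback();            // none of the inserts survive
            throw e;
        }
    }
}
```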


What are the standard isolation levels defined by JDBC?

The JDBC Connection interface defines five isolation-level constants; the underlying database may or may not support each of them. The list below shows them from the least restrictive to the most restrictive.

  • TRANSACTION_NONE: A constant indicating that transactions are not supported.
  • TRANSACTION_READ_UNCOMMITTED: A constant indicating that dirty reads, non-repeatable reads, and phantom reads can occur.
  • TRANSACTION_READ_COMMITTED: A constant indicating that dirty reads are prevented; non-repeatable reads and phantom reads can occur.
  • TRANSACTION_REPEATABLE_READ: A constant indicating that dirty reads and non-repeatable reads are prevented; phantom reads can occur.
  • TRANSACTION_SERIALIZABLE: A constant indicating that dirty reads, non-repeatable reads, and phantom reads are prevented.

All of these constants are of type int and are defined in the java.sql.Connection interface.

We should set the isolation level on our JDBC Connection object prior to using it. Some application servers allow us to specify a default isolation level for connections acquired from a pool. Different database management systems support different subsets of these levels, so we should check whether a level is supported before setting it.

Connection con = DriverManager.getConnection(url);
DatabaseMetaData dbmd = con.getMetaData();
if (dbmd.supportsTransactionIsolationLevel(Connection.TRANSACTION_SERIALIZABLE)) {
    con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
}
// doing the transactional tasks


The following table shows which anomalies can occur under each isolation level:

Isolation Level               | Dirty Read | Non-Repeatable Read | Phantom Read
TRANSACTION_READ_UNCOMMITTED  | Possible   | Possible            | Possible
TRANSACTION_READ_COMMITTED    | Prevented  | Possible            | Possible
TRANSACTION_REPEATABLE_READ   | Prevented  | Prevented           | Possible
TRANSACTION_SERIALIZABLE      | Prevented  | Prevented           | Prevented


What are Savepoints?

As you know, transactions can be rolled back, meaning that we can restore the database state to where it was before the transaction started. Sometimes the operations a transaction performs are so expensive and lengthy that we prefer not to roll back the entire transaction, but instead to roll it back only to some specific point.

Savepoints allow us to mark a point within the transaction's execution and later roll the transaction back to that specific marker if required.

The following sample code demonstrates using Savepoints:

Statement stmt = con.createStatement();
stmt.executeUpdate("insert into student(field1) values (110)");
// set a savepoint after the first insert
Savepoint svpt1 = con.setSavepoint("SAVEPOINT_1");
int rows = stmt.executeUpdate("insert into student(field2) values ('value')");
// rolling back the transaction to the savepoint, undoing the second insert
con.rollback(svpt1);
// committing the transaction and persisting the first insert operation
con.commit();


Note that some databases do not support nested savepoints, meaning that you can have only one savepoint per transaction. Check your DBMS documentation for any savepoint restrictions.
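You can also ask the driver whether savepoints are supported at all before relying on them; DatabaseMetaData exposes supportsSavepoints() for exactly this purpose. A sketch, reusing the hypothetical student table from the earlier examples:

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.SQLException;
import java.sql.Savepoint;
import java.sql.Statement;

public class SavepointCheck {
    static final String SAVEPOINT_NAME = "SAVEPOINT_1";

    public static void run(Connection con) throws SQLException {
        DatabaseMetaData dbmd = con.getMetaData();
        if (!dbmd.supportsSavepoints()) {
            return; // fall back to whole-transaction rollback only
        }
        con.setAutoCommit(false);
        try (Statement stmt = con.createStatement()) {
            stmt.executeUpdate("insert into student(field1) values (110)");
            Savepoint svpt = con.setSavepoint(SAVEPOINT_NAME);
            stmt.executeUpdate("insert into student(field1) values (120)");
            con.rollback(svpt); // undo only the second insert
            con.commit();       // persist the first insert
        }
    }
}
```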


What are the considerations for deciding on transaction boundaries?

Designing and deciding transaction-processing boundaries is very domain specific, but there are some points we should always consider:

  • Use manual transaction processing when possible.
  • Bundle operations together, as much as the business domain allows, to increase overall performance.
  • Use isolation levels carefully; a more restrictive isolation level means more transactions blocked waiting for the current one. Decide which isolation level is really required by consulting the business analysis documents.
  • Consult the database and JDBC driver documentation to understand which isolation levels are supported and what the default isolation level is.
  • Fine-tune the database's lock escalation attributes according to the system's characteristics.
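The first two points above can be sketched as a reusable template (the account table and amounts are assumptions for illustration): bundle the operations that must succeed or fail together into one unit of work, commit once, roll back on any failure, and restore the connection's previous autocommit mode.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class TransactionBoundary {
    // The operations forming one unit of work: both must apply, or neither.
    static final String[] UNIT = {
        "update account set balance = balance - 10 where id = 1",
        "update account set balance = balance + 10 where id = 2"
    };

    public static void transferUnit(Connection con) throws SQLException {
        boolean previousAutoCommit = con.getAutoCommit();
        con.setAutoCommit(false);
        try (Statement stmt = con.createStatement()) {
            for (String sql : UNIT) {
                stmt.executeUpdate(sql);
            }
            con.commit();                  // the whole unit becomes visible at once
        } catch (SQLException e) {
            con.rollback();                // the whole unit is undone
            throw e;
        } finally {
            con.setAutoCommit(previousAutoCommit);
        }
    }
}
```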




