
Optimistic Locking

Introduction


We all know about the isolation levels of MS SQL Server, but there is some confusion around the optimistic locking technique. This article is for advanced SQL developers who want to learn and understand optimistic locking. If you are new to isolation levels, please read my previous article, Isolation Level of MS SQL Server, first. I hope you find this knowledgeable and helpful.

How to Find Current Isolation Level

To find the current isolation level of your session, use this SQL statement:

 SELECT CASE transaction_isolation_level
            WHEN 0 THEN 'Unspecified'
            WHEN 1 THEN 'ReadUncommitted'
            WHEN 2 THEN 'ReadCommitted'
            WHEN 3 THEN 'RepeatableRead'
            WHEN 4 THEN 'Serializable'
            WHEN 5 THEN 'Snapshot'
        END AS TRANSACTION_ISOLATION_LEVEL
 FROM sys.dm_exec_sessions
 WHERE session_id = @@SPID;

Problem with Locking

As we all know, to read data from the database (SELECT ...) a Shared lock is acquired, and when we perform a DML operation (writing something to the database) an Exclusive lock is used. Shared and Exclusive locks are not compatible with each other, which means that while we write we cannot read, and while we read we cannot write.

Of course, we can do a dirty read with the READ UNCOMMITTED isolation level or the WITH (NOLOCK) table hint, but that is not recommended at all, and every developer knows why.
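As a hedged sketch of that dirty read (the table dbo.Accounts and its columns are hypothetical, used only for illustration), one session updates a row inside an open transaction while another session reads the uncommitted value:

 -- Session 1: update inside an open (uncommitted) transaction
 BEGIN TRAN;
 UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
 -- ... the transaction is still open; the change is not committed yet ...

 -- Session 2: reads the uncommitted (dirty) value despite the Exclusive lock
 SELECT Balance
 FROM dbo.Accounts WITH (NOLOCK)  -- same effect as READ UNCOMMITTED
 WHERE AccountId = 1;

 -- Session 1: roll back -- Session 2 has now read a value that never existed
 ROLLBACK;

This is exactly why dirty reads are discouraged: the reader may act on data that is later rolled back.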

So, What Are the Solutions?

Starting with MS SQL Server 2005, Microsoft provides two types of optimistic locking as a solution:

1.  SNAPSHOT

2.  READ COMMITTED SNAPSHOT

Both isolation levels use the row-versioning technique and the TempDB database.

The benefit we get from these isolation levels is that reads do not block writes and writes do not block reads, which matters in an OLTP environment.
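As a hedged sketch of reads not blocking writes (dbo.Accounts is a hypothetical table, and SNAPSHOT must already be allowed on the database, as shown later in this article), a reader in a snapshot transaction keeps seeing its own consistent version while a concurrent write commits freely:

 -- Session 1: start a snapshot transaction and read a row
 SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
 BEGIN TRAN;
 SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;  -- e.g. 500

 -- Session 2: a concurrent write commits without being blocked by the reader
 UPDATE dbo.Accounts SET Balance = 400 WHERE AccountId = 1;

 -- Session 1: still sees the row version as of the start of its transaction
 SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;  -- still 500
 COMMIT;

The old row version that Session 1 keeps reading is the one held in TempDB by the version store.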

Implementing Optimistic Locking

To implement optimistic locking we have to plan it well; otherwise we can suffer big performance problems, loss of availability, and incorrect query results.

So let's discuss it in our old-fashioned way: a step-by-step process.

Step-1 [ Can Our Production Environment Handle This Load? ]

When we change the isolation level to SNAPSHOT, MS SQL Server waits for running transactions to complete and then starts versioning data modifications. It uses an extra 14 bytes per row in a table for that.

With this, row versions are created in the TempDB database to hold the previous versions, so we need a TempDB large enough to maintain the version store used by the Snapshot isolation level.

Another aspect is that a long version chain can make queries slow.

We have to monitor our system for that; we can use SQL Server performance counters to do so.
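As a hedged example of such monitoring (exact counter names can vary slightly between SQL Server versions), the version store counters and the active row versions in TempDB can be inspected like this:

 -- Version store size and generation/cleanup rates from the performance counters
 SELECT counter_name, cntr_value
 FROM sys.dm_os_performance_counters
 WHERE object_name LIKE '%Transactions%'
   AND counter_name LIKE 'Version Store%';

 -- The active row versions themselves, held in TempDB
 SELECT COUNT(*) AS version_rows
 FROM sys.dm_tran_version_store;

If the version store keeps growing while the cleanup rate stays low, a long-running snapshot transaction is probably holding old versions alive.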

Step-2 [ We have to Choose ]

We have to choose carefully between Snapshot and Read Committed Snapshot.

To use the SNAPSHOT isolation level, we first have to enable it at the database level, like this:

 ALTER DATABASE IsolationTests
 SET ALLOW_SNAPSHOT_ISOLATION ON;

Then we have to enable the Snapshot isolation level per transaction, like this:

 SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
 BEGIN TRAN;
 .....
 .....
 COMMIT;
So we can mix the Snapshot isolation level with other isolation levels, and where it is needed we can enable it with a single SQL statement, SET TRANSACTION ISOLATION LEVEL SNAPSHOT. In other words, the Snapshot isolation level can be set per session.

On the other hand, READ COMMITTED SNAPSHOT only takes a database-level setting, like this:

 ALTER DATABASE IsolationTests
 SET READ_COMMITTED_SNAPSHOT ON;

No other statement is required; once this is executed at the database level, it changes the default behavior of READ COMMITTED for every query. This may give us incorrect results, depending on how our existing queries are written.

Hope you like it.



Published at DZone with permission of Joydeep Das, DZone MVB. See the original article here.
