
The Guts 'n' Glory of Database Internals: Log Shipping and Point in Time Recovery


Journals and logs are central to any database recovery scheme. Read on to find out how they work.



In my previous post, I talked about the journal and how it is used to recover the database state in case of errors.

In other words, we write everything that we are going to do into the journal, and the code is written to run through the journal and apply these changes to the database.
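
As a rough sketch (none of this is any particular engine's API; the `Journal` class, the JSON-lines format, and the toy key/value store are all made up for illustration), the write path records the intent first, and recovery simply replays the file:

```python
# Minimal sketch of a journal: every change is appended (and fsynced) to the
# journal before it touches the database state, and recovery replays the
# journal to reconstruct that state.
import json
import os

class Journal:
    def __init__(self, path):
        self.path = path

    def append(self, entry):
        # Record the intended change durably before applying it.
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")
            f.flush()
            os.fsync(f.fileno())

    def replay(self, apply):
        # Recovery: run through the journal and re-apply every change.
        if not os.path.exists(self.path):
            return
        with open(self.path) as f:
            for line in f:
                apply(json.loads(line))

# Usage: a toy key/value "database" recovered purely from its journal.
db = {}
journal = Journal("db.journal")
journal.replay(lambda e: db.__setitem__(e["key"], e["value"]))

entry = {"key": "users/1", "value": {"name": "Oren"}}
journal.append(entry)               # write the intent first...
db[entry["key"]] = entry["value"]   # ...then apply it to the in-memory state
```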

Ponder this for a moment and consider the implications of applying the same concept elsewhere. Log shipping is basically taking the journal file from one server and asking another server to apply it as well. That is all.

Everything else is mere details that aren't really interesting. Well, they are, but first you need to grasp what is so special about log shipping. If you already have a journal and recovery from it, you are probably at least 80% of the way to getting log shipping to work.
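
To make that concrete, here is a hedged sketch of the idea: a closed journal segment is copied from the primary, and the secondary runs the same kind of replay loop its recovery code already has. The paths and the JSON-lines journal format are assumptions carried over from the sketch above, not anything a real engine mandates.

```python
# Log shipping reuses the recovery idea verbatim: copy a journal segment
# from the primary and replay it on the secondary. Paths are illustrative.
import json
import shutil

shutil.copy("/primary/journal.0042", "/secondary/incoming/journal.0042")

secondary_db = {}
with open("/secondary/incoming/journal.0042") as f:
    for line in f:
        entry = json.loads(line)
        secondary_db[entry["key"]] = entry["value"]   # apply on the secondary
```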

Now that you understand what log shipping is, let's talk about its implications. Depending on the way your log is structured, you might be able to accept logs from multiple servers and apply all of them (accepting the fact that they might conflict), but in practice this is almost never done. Log shipping typically mandates that one server be designated the master (statically or dynamically) and the other(s) be designated as secondaries (typically read-only copies).

The process by which the logs are shipped is interesting. You can do that on a time basis (every 5-15 minutes), every time you close a journal file (64MB-256MB), etc. This is typically called offline log shipping, because there is a large gap between the master and the secondary. There is also online log shipping, in which every write to the journal is also a socket write to the other server, which accepts the new journal entries, writes them to its own journal, and applies them immediately, resulting in a much narrower delay between the systems. Note that this has its own issues, because this is now a distributed system with all that it implies (if the secondary isn't available for an hour, what does that mean, etc.).
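
A sketch of the online variant under the same made-up journal format (framing, authentication, back-pressure, and reconnection are all ignored, and every name here is invented for illustration): the primary pushes each journal write down a socket, and the secondary appends it to its own journal and applies it right away.

```python
# Illustrative sketch of online log shipping: every journal write on the
# primary is also pushed over a socket; the secondary appends it to its own
# journal and applies it immediately.
import json
import socket

def primary_write(journal_file, conn, entry):
    line = json.dumps(entry) + "\n"
    journal_file.write(line)       # local durability first (fsync omitted)
    journal_file.flush()
    conn.sendall(line.encode())    # then ship the same entry to the secondary

def secondary_loop(listen_port, journal_path, apply):
    with socket.create_server(("", listen_port)) as srv:
        conn, _ = srv.accept()
        with conn, conn.makefile() as stream, open(journal_path, "a") as journal_file:
            for line in stream:
                journal_file.write(line)   # append to the secondary's own journal
                journal_file.flush()
                apply(json.loads(line))    # and apply right away

# Usage on the primary (illustrative):
# conn = socket.create_connection(("secondary.example", 4000))
# with open("db.journal", "a") as journal_file:
#     primary_write(journal_file, conn, {"key": "users/1", "value": {"name": "Oren"}})
```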

But journals also allow us to do other fun stuff. In particular, if the journal records the timestamp of each transaction (and most do), it allows us to do what is called "point in time" recovery: starting from a particular backup, apply all the committed transactions up to a certain point in time, bringing the database to its state at 9:07 a.m. (one minute before someone ran an UPDATE statement without a WHERE clause).

Note that all of the code that actually implements this has already been written as part of ensuring that the database can recover from errors, and all we need to implement "point in time" recovery is to just stop at a particular point in time. That is a pretty neat thing, in my opinion.
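
For illustration only, assuming each journal entry carries a `committed_at` timestamp (a detail that varies between engines), point-in-time recovery is the same replay loop with one extra check:

```python
# Point-in-time recovery: the normal recovery loop, stopping once a
# committed transaction's timestamp passes the requested target.
import json
from datetime import datetime

def recover_to(journal_path, apply, stop_at):
    with open(journal_path) as f:
        for line in f:
            entry = json.loads(line)
            committed_at = datetime.fromisoformat(entry["committed_at"])
            if committed_at > stop_at:
                break          # stop one minute before that fateful UPDATE
            apply(entry)

# Restore the state as of 9:07 a.m. (apply_to_database is hypothetical):
# recover_to("db.journal", apply_to_database, datetime(2016, 7, 4, 9, 7))
```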



Published at DZone with permission of Oren Eini, DZone MVB. See the original article here.
