
The Guts 'n' Glory of Database Internals: Writing to a Data File

Here's a look at database journals and how they can be used, in conjunction with other tools, to keep things moving efficiently.

So, we now know how we can write to a journal file efficiently, but a large part of doing that is relying on the fact that we are never actually going to read from the journal file. In other words, this is like keeping your receipts in case of an audit. You do it because you have to, but you hope you'll never actually need them. You just throw them into a drawer and sort them out if the need ever arises.

Most database engines that implement a journal make a distinction: the journal is strictly for durability and recovery, while the data file(s) are what actually store the data the engine operates on. In our case, we'll assume a single journal and a single data file.

On every commit, we'll write to the journal file, as previously discussed, and ensure that the data is safely on disk. But what happens to the writes to the data file?

Simple. Absolutely nothing.

Whenever we need to write to the data file, we make buffered writes into it, which go into the big chain of buffers that merge, reorder, and try to optimize our I/O. Which is great, because we don't really need to care about that. We can let the operating system handle all of it.

Up to a point, of course.
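
To make the two write paths concrete, here is a minimal sketch in Python. The Engine class, the file layout, and the page-sized journal entries are illustrative assumptions rather than the article's actual format; the point is that a commit appends to the journal and fsyncs it before returning, while data-file writes are plain writes that simply land in the OS page cache.

```python
import os

PAGE_SIZE = 4096  # illustrative page size


class Engine:
    """Hypothetical minimal engine: one journal file plus one data file."""

    def __init__(self, journal_path, data_path):
        # The journal is append-only; the data file is written in place.
        self.journal_fd = os.open(
            journal_path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
        self.data_fd = os.open(data_path, os.O_RDWR | os.O_CREAT, 0o644)

    def commit(self, pages):
        """Append the transaction's pages to the journal, then fsync it.
        Only after the fsync returns do we acknowledge the commit."""
        for page_no, buf in pages.items():
            os.write(self.journal_fd, page_no.to_bytes(8, "little"))
            os.write(self.journal_fd, buf)
        os.fsync(self.journal_fd)  # the only fsync on the commit path

    def write_page(self, page_no, buf):
        """A plain buffered write into the data file: it goes into the OS
        page cache, which may merge, reorder, or delay the actual I/O."""
        os.pwrite(self.data_fd, buf, page_no * PAGE_SIZE)
```

Note that write_page never calls fsync: durability comes entirely from the journal, and the data file is allowed to lag behind.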

Every now and then, we'll issue an fsync on the data file, forcing the file system to actually flush all those buffered writes to disk. We can do this asynchronously and continue operating on the file. By the time the fsync is done (which can take a long time if we had a lot of writes and a busy server), we know the minimum amount of data that has already been written to the data file and persisted on disk. Since we can match that up to a position in the journal, we can safely say that on the next recovery, we can start reading the journal from that location.

If writes from later in the journal had already made it physically into the data file, it doesn't matter, because they will simply be overwritten when we replay the journal entries that we have.
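
Here is a sketch of that flush/checkpoint cycle and the matching recovery, continuing the hypothetical layout above. The side checkpoint file, the checkpoint() and recover() names, and the fixed-size entries are all illustrative (a real engine would typically keep this bookkeeping in a file header): we fsync the data file, durably record the journal offset whose entries we know have already been applied to it, and on recovery we replay the journal from that offset. Replaying entries whose pages had already reached the data file just writes the same bytes again, which is why it's safe.

```python
import os

PAGE_SIZE = 4096  # must match the journal entry layout used on the write path


def checkpoint(data_fd, applied_up_to, checkpoint_path):
    """Make the buffered data-file writes durable, then durably record the
    journal offset up to which those writes had been applied. Nothing
    before that offset needs to be replayed on the next recovery."""
    os.fsync(data_fd)  # can take a while if the server has been busy
    tmp = checkpoint_path + ".tmp"
    with open(tmp, "w") as f:
        f.write(str(applied_up_to))
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, checkpoint_path)  # atomically swap in the new marker


def recover(journal_fd, data_fd, checkpoint_path):
    """Replay the journal into the data file, starting at the last
    checkpointed offset. Pages that had already reached the data file are
    simply written again with the same contents, so replay is idempotent."""
    start = 0
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            start = int(f.read())
    os.lseek(journal_fd, start, os.SEEK_SET)
    while True:
        header = os.read(journal_fd, 8)
        if len(header) < 8:
            break  # clean end of journal
        page = os.read(journal_fd, PAGE_SIZE)
        if len(page) < PAGE_SIZE:
            break  # torn tail from a crash mid-write; ignore it
        page_no = int.from_bytes(header, "little")
        os.pwrite(data_fd, page, page_no * PAGE_SIZE)
    os.fsync(data_fd)
```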

Doing it this way allows us to generate large batches of I/O and, in most cases, gives the operating system the freedom to flush things from its buffers on its own timeline. We just make sure this doesn't turn into a degenerate case (where we'd need to read tens of GB of journal files on recovery) by forcing a flush every now and then, so recovery is fast in nearly all cases.

All of this I/O tends to happen on async threads, and typical deployments will have separate volumes for logs and data files, so the two I/O streams can proceed in parallel instead of competing with one another.
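
As a rough illustration of that split, here is a sketch of pushing the data-file flush onto its own thread, assuming a checkpoint_fn along the lines of the checkpoint() sketched earlier. The BackgroundFlusher name and the purely timer-based trigger are assumptions; a real engine would typically also trigger a checkpoint once the journal grows past a size threshold, which is what keeps recovery time bounded.

```python
import threading


class BackgroundFlusher:
    """Hypothetical background flusher: runs the checkpoint logic on its
    own thread, so the commit path only ever waits on the journal fsync."""

    def __init__(self, checkpoint_fn, interval_seconds=5.0):
        self.checkpoint_fn = checkpoint_fn  # e.g. the checkpoint() sketched above
        self.interval = interval_seconds
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def _run(self):
        # Event.wait() doubles as the sleep and the shutdown signal.
        while not self._stop.wait(self.interval):
            self.checkpoint_fn()
```

The commit path never touches this thread; it only ever blocks on the journal fsync.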

By the way, I'm running this series based on my own experience in building databases, and I'm trying to simplify it as much as it is possible to simplify such a complex topic. If you have specific questions/topics you'd like me to cover, I'll be happy to take them.

Published at DZone with permission of Oren Eini, DZone MVB.
