
Chronicle Journal - Customizable Data Store

It's your data, and you need it now. Learn how to improve the performance of any data structure or algorithm by building a data store that does exactly what you need.



Overview

The way to make any data structure or algorithm as fast as possible is for the code to do exactly what you want and no more.  The problem with building a data store which does everything anyone could want is that it won't do anything particularly well.

What Can You Achieve With a Custom Data Store in Terms of Performance?

With a custom data store, you can:

  • Achieve read/write latencies of around 75 nanoseconds.
  • Support throughputs of 40 million operations per second.
  • Reduce the size of your data by a factor of 100 or more with binary encoding and compression. This saves memory and increases scalability.
  • Control how replication utilises your network, or how it is synchronized with your database.
  • Optimise the hash function for a known, fairly stable key set to minimise the collision rate.

Chronicle Map and Queue are very efficient; however, if they don't do exactly what you need, or they do more than you need, this sort of custom collection has the potential to be much faster.
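The last point, optimising the hash function for a stable key set, can be sketched as follows. This is a hypothetical example, not Chronicle code; the multiplicative constant and the ticker-style key format are illustrative assumptions:

```java
// Hypothetical sketch: a simple multiplicative hash tuned for a known,
// fairly stable set of short string keys, mapped onto a segment index.
final class TickerHash {
    static int hash(String key, int segments) {
        int h = 0;
        for (int i = 0; i < key.length(); i++)
            h = h * 0x9E3779B1 + key.charAt(i); // golden-ratio constant mixes bits
        h ^= h >>> 16;                          // fold high bits into the low bits
        return (h & 0x7FFFFFFF) % segments;     // non-negative segment index
    }
}
```

With a known key set, you could even check at build time that the chosen function is collision-free (or nearly so) across your actual keys.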

Do We Really Need a Customizable Data Store?

Most developers are not too concerned about how efficient their data store is; generic data stores work well enough and hide the details of how they really work. This can save developers a lot of time worrying about how a data store functions.

There are times when the choice of data store and how it operates really does matter. If a data store is heavily used, how the data is arranged, the functionality it provides, and just as importantly, what it doesn't provide really matters.  You don't want to be paying the overhead of supporting functionality you are not using.

Why Do Reactive Systems Have Greater Demands?

Reactive systems have greater demands on timeliness, needing to see events/updates within milliseconds, or even microseconds, of their being committed.

Reactive systems are more likely to care about how the data got to its final state. Unlike polling systems, where you are more likely to see just the final result of multiple changes, a reactive system might need to see exactly what changes were made and in what order.

Low Latency, High Throughput

A simple thread-safe, segmented key-value store can have latencies around 75 nanoseconds, and support 40 million accesses (gets or puts) per second. Adding support for more functionality will impact performance, so if performance is critical, you only want to add the functionality you actually need.

Even something as simple as adding a timestamp, which can take 30 nanoseconds, sounds quick but can mean operations take 50% longer.
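That arithmetic can be made explicit. The 60 ns base operation time below is an assumption chosen to illustrate the 50% figure; it is not a measurement:

```java
// Sketch: relative cost of adding a fixed per-operation overhead such as a
// timestamp. The numbers used with it are illustrative, not measurements.
final class TimestampCost {
    static double overheadPercent(double operationNanos, double timestampNanos) {
        return 100.0 * timestampNanos / operationNanos;
    }
}
```

Adding a 30 ns timestamp to a 60 ns operation is a 50% overhead; the same timestamp on a 1 µs operation is only 3%, which is why this trade-off only bites at the very low-latency end.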

What Options Would You Like to Be Able to Customize?

Which Level of Ordering or Event Serialization Do You Need?

As you reduce the level of the ordering constraint you find:

  • Total global ordering
  • Data store ordering
  • Segment ordering
  • Key ordering
  • No ordering

The ordering constraints (i.e. preserving the order in which all readers see events) are closely related to locking or serialization of events. Locking is easier to implement and supports richer functionality; however, lock-free algorithms can be not only faster but also more scalable, with more consistent latencies.

In a data store with total ordering, you will see all changes in a consistent order. While this is the safest option, it places a global serialization requirement on all the data, which dramatically limits the options for concurrent updates. However, it does simplify locking, as you have a global lock on all the data.

An alternative to total ordering is to have ordering per data store. This means you will know the exact order of all changes to the store, but not the order of changes between stores. (You can add timestamps to get an idea of when changes happened.)

To allow concurrency within a store, you can use segment- or page-based ordering. When you update an entry which is assigned to a segment, that segment is locked, but other segments can be updated concurrently. You can get the order of all events within a segment, but not between segments.

The greatest concurrency can be achieved by only limiting the order of changes to individual keys. This way any number of keys can be updated concurrently, but at least you know what a key was updated to last.

Finally, you might not need any of this. This is particularly useful if an entry is never changed; it either exists or it doesn't. You might want to prevent any record being altered, i.e. records can only be added. If the same record with the same details is added twice, this may be acceptable and ignored as a duplicate.
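The segment-based ordering described above can be sketched as follows. This uses plain synchronized HashMaps for clarity; a real store would use off-heap memory, tuned hashing, and finer-grained locking:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a segmented key-value store. Each segment has its own lock, so
// updates to different segments proceed concurrently, while the order of
// events within a segment is preserved.
final class SegmentedStore<K, V> {
    private final Map<K, V>[] segments;

    @SuppressWarnings("unchecked")
    SegmentedStore(int segmentCount) {
        segments = new Map[segmentCount];
        for (int i = 0; i < segmentCount; i++)
            segments[i] = new HashMap<>();
    }

    private Map<K, V> segmentFor(K key) {
        return segments[(key.hashCode() & 0x7FFFFFFF) % segments.length];
    }

    V put(K key, V value) {
        Map<K, V> seg = segmentFor(key);
        synchronized (seg) { return seg.put(key, value); } // lock one segment only
    }

    V get(K key) {
        Map<K, V> seg = segmentFor(key);
        synchronized (seg) { return seg.get(key); }
    }
}
```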

Shared Memory Data Store

A feature we have found particularly useful is being able to share data between JVMs on the same machine. This allows all JVMs to access the data at in-memory speeds.

While this feature doesn't slow down the solution, it does place some constraints on the design. In particular, Java doesn't support a heap shared between JVMs; to share memory, you need to use off-heap memory.

If a data structure is not shared across JVMs, you can use on-heap locking mechanisms, which are slower but provide other functionality, such as fair locking.
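A minimal sketch of the off-heap sharing technique, using a memory-mapped file via the standard NIO API. The file name and 8-byte counter layout are arbitrary choices for the example, and no cross-JVM locking is shown:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Sketch: off-heap memory shared between JVMs via a memory-mapped file.
// Every JVM that maps the same file sees the same bytes at in-memory speed;
// the OS flushes the pages to disk in the background.
final class SharedCounter {
    static long bumpAndGet(File file) {
        try (RandomAccessFile raf = new RandomAccessFile(file, "rw");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer bb = ch.map(FileChannel.MapMode.READ_WRITE, 0, 8);
            long next = bb.getLong(0) + 1; // read from off-heap memory
            bb.putLong(0, next);           // visible to other JVMs mapping this file
            return next;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```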

Replication Models

There are a number of ways to replicate data, which will affect the efficiency of your data store:

  • Eventual consistency. We favour this model as it handles split brain situations gracefully.
  • Transactional updates. An event is either visible to all nodes in a cluster or none of them.
  • At least one backup. An update is saved to at least two nodes. If one fails the data is not lost.  This can be faster than ensuring every node has accepted the update.
  • Multi-cluster replication. While data might be freely replicated within a local cluster, you may want controls over which data is replicated between regions and how this is performed.
  • Traffic shaping. You may want to control the rate of updates, the bandwidth used, and whether compression is used.

Synchronous or Asynchronous Persistence

Our solutions try very hard to be as fast synchronously as most solutions are when performing updates asynchronously. This helps reduce overhead and complexity.

Typically a write to a memory mapped file isn't flushed to disk immediately so the choice of disk subsystem doesn't matter provided you haven't overloaded it. In terms of throughput, it's your bandwidth utilisation which matters.  If you use even a fraction of your bandwidth on a sustained basis, you are likely to run out of disk space pretty quickly. If you are writing even a very modest 12 MB/s sustained, that is over 1 TB per day.
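The disk-space arithmetic is straightforward:

```java
// Sketch: why even a modest sustained write rate fills a disk quickly.
final class DiskBudget {
    static double terabytesPerDay(double megabytesPerSecond) {
        // 86,400 seconds per day, 10^6 MB per TB (decimal units)
        return megabytesPerSecond * 86_400 / 1_000_000;
    }
}
```

At 12 MB/s sustained, that comes to about 1.04 TB per day, which matches the figure above.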

The operating systems we have tested don't hide the disk sub-system from you entirely. For one in every ten, or one in every hundred, writes, the latency will depend on the type of disk sub-system you have. If you care about 99th percentile latencies, your choice of disk sub-system still matters.

You would assume that anyone who cares about performance would be using SSD, if not PCIe SSD, as they have latencies around 100 times lower than spinning disks. The number of IOPS (IOs per second) for enterprise SSD is around 100 times higher as well. Desktop SSD can be 1,000 times higher, so you can expect this to become the norm for enterprise disks as well.

Unfortunately, it's not that simple in large organisations, and getting SSD drives installed could take a very long time, e.g. 6 to 12 months, if they can get approval at all.

One workaround is to write data asynchronously to memory and have it spooled to disk in another thread.
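That workaround can be sketched as follows; the sink callback stands in for the actual disk-writing code, which would append to a file or memory-mapped journal:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Sketch: accept writes at memory speed and spool them to disk from a
// background thread, keeping IO off the hot path.
final class AsyncSpooler {
    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();

    AsyncSpooler(Consumer<byte[]> sink) {
        Thread writer = new Thread(() -> {
            try {
                for (;;) sink.accept(queue.take()); // drain off the hot path
            } catch (InterruptedException e) {
                // interrupted: stop spooling
            }
        });
        writer.setDaemon(true);
        writer.start();
    }

    void write(byte[] data) {
        queue.add(data); // returns immediately; the writer thread does the IO
    }
}
```

Note the trade-off: writes acknowledged this way can be lost if the process dies before the spooler catches up, which is exactly the synchronous-versus-asynchronous persistence choice discussed above.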

Should the Data be Stored as Text or Binary?

Binary data is usually more efficient than text, unless the data is already in a text format. Some gains can be made by transforming highly verbose formats like XML or JSON into a binary format which is turned back into text when retrieved. This is a format-specific compression which can work well even when compared with generic compression (see the next section).

Converting to a binary format can reduce the size of data by a factor of 3 to 10. If the format can be lossy, you can save even more space (e.g. white space can be dropped). If generic compression is also used, you can get compression ratios of 20 to 200 times.
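A toy example of the text-to-binary reduction; the record layout and field values are made up for illustration:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Sketch: re-encoding a verbose text record as binary. Field names and
// punctuation disappear, and numbers shrink to fixed-width fields.
final class BinaryVsText {
    static final String JSON = "{\"symbol\":\"VOD.L\",\"price\":227.5,\"quantity\":1000}";

    static int textSize() {
        return JSON.getBytes(StandardCharsets.UTF_8).length;
    }

    static int binarySize() {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (DataOutputStream dos = new DataOutputStream(out)) {
            dos.writeUTF("VOD.L");  // 2-byte length prefix + 5 bytes
            dos.writeFloat(227.5f); // 4 bytes
            dos.writeInt(1000);     // 4 bytes
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot happen for an in-memory stream
        }
        return out.size();
    }
}
```

Here the binary form is 15 bytes against 48 bytes of JSON, a little over a 3x reduction before any generic compression is applied.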

Should the Data be Compressed?

Compressing data is a trade-off between CPU and space consumed. There are a number of compression strategies, ranging from those which use less CPU but don't compress as well, to those which use more CPU and compact the data further.

This can not only save disk space but memory consumption as well.  This allows you to scale the amount of data you can store efficiently.

If you have plenty of memory, you might want to avoid compression to save CPU.

If your data entries are large, compressing each individual entry can work well. If your data entries are small, you can get significant gains by compressing blocks of entries.
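A quick way to see the per-entry versus per-block difference, using the JDK's GZIP support; the entry contents are invented for the demonstration:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

// Sketch: small entries compress poorly one at a time, because each stream
// pays a fixed header/trailer cost, but compress well as a block, where the
// compressor can exploit redundancy across entries.
final class BlockCompression {
    static int gzipSize(byte[] data) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
            gz.write(data);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return out.size();
    }

    static int perEntry(byte[][] entries) {
        int total = 0;
        for (byte[] e : entries) total += gzipSize(e); // one gzip stream per entry
        return total;
    }

    static int asBlock(byte[][] entries) {
        ByteArrayOutputStream block = new ByteArrayOutputStream();
        for (byte[] e : entries) block.write(e, 0, e.length); // concatenate first
        return gzipSize(block.toByteArray());
    }
}
```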

You might even need a hybrid approach where recent data is not compressed but longer term data is compressed asynchronously.

If you use generic compression, you can get compression ratios of between 5 to 50 times.

In a Reactive System, Can a Consumer Consolidate the Updates it Missed?

If you have a slow consumer in your system, you want a simple way for it to catch up. You will always have consumers which are momentarily behind, but in some systems they might be a long way behind. In Chronicle Queue, for example, a consumer can be more than main memory behind the producer, as it never drops updates.

If you drop updates, you can quickly catch up, assuming there are lots of updates for the same key, or there is a simple consolidation strategy.

There are times when you need to see every event/message/alteration, no matter how old it is. This is useful for auditing purposes.

You might need a hybrid approach where every event is recorded, but some consumers can skip to the latest update for a key.
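The skip-to-latest behaviour for a consumer can be sketched as follows; the key/value representation here is a simplification for the example:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: a slow consumer consolidating the updates it missed down to the
// latest value per key. Every event is still recorded upstream; only this
// consumer's view is collapsed.
final class Consolidator {
    static Map<String, String> consolidate(List<String[]> missedKeyValuePairs) {
        Map<String, String> latest = new LinkedHashMap<>();
        for (String[] update : missedKeyValuePairs)
            latest.put(update[0], update[1]); // a later update for a key wins
        return latest;
    }
}
```

A consumer processing the consolidated map does work proportional to the number of distinct keys rather than the number of missed events.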

Batching Data

In a transactional system with a high per-transaction overhead, batching can really help. Batching is also useful for IO operations, again to reduce overhead.

Most of our solutions try to have a very low overhead per transaction to minimise latency, so the addition of batching can introduce more overhead than it saves.

More Robust Security Models

You may need to be able to control access to individual data sets, but you might also need to add Access Control Lists to every individual key.

You might need access to entries to be based on the contents of those entries. For example, employees in New York might be able to update entries with location=New York. Employees in a region, organisation, or team can manage their own data.
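Such a content-based rule might look like the following sketch; the location field and region check are hypothetical:

```java
import java.util.Map;
import java.util.function.BiPredicate;

// Sketch: access control based on an entry's contents rather than its key.
// The rule grants update rights only when the user's region matches the
// entry's "location" field.
final class ContentAcl {
    static BiPredicate<String, Map<String, String>> regionRule() {
        return (userRegion, entry) -> userRegion.equals(entry.get("location"));
    }
}
```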

Time Stamping Changes

Should updates/events be timestamped? This can be useful, but is a non-trivial overhead if not used.

Auditing Information and Simplified Security

When a change is made, you may need to record additional information, such as who made the change, when, and from which client. This is useful for auditing purposes and for simplifying your security model.

Using auditing can encourage users to be more considered about the changes they make. They can still perform actions which you might want performed only rarely, but users know you can track exactly who did what and when.

If you also have the ability to reverse/correct changes made, this can be another way to deal with mistakes.

What About Remote Access, Object Serialization, and Monitoring?

Chronicle Journal is built on Chronicle Engine and offers all the same functionality that Engine does.  Journal provides a customised data store for Engine to distribute.

Is Chronicle Journal Open Source?

We have two open source data storage solutions, Chronicle Queue and Chronicle Map, which work very well for specific use cases, and you may wish to try these first to see if they do what you need.

Chronicle Journal is designed to be more customizable, which in turn needs more consulting to realise the solution. As such, it is on GitHub but only accessible to clients with a support agreement.


Topics:
performance, data, data store, database

Published at DZone with permission of Peter Lawrey, DZone MVB. See the original article here.

