3 In-Memory Cache Challenges Solved

Learn how to make transactions ACID, how to process datasets that are larger than available RAM, and how to properly persist data.

In-memory cache databases have come a long way since Memcached — adding advanced data types, secondary indexes, stored procedures, and more. One cache solution that includes these and other state-of-the-art features is Tarantool, a database developed in-house at mail.ru and then open-sourced. This article will specifically explore Tarantool's solutions to three significant challenges for in-memory cache databases:

  • How to make transactions ACID.
  • How to process datasets that are larger than available RAM.
  • How to properly persist data.

ACID Transactions

ACID transactions in a cache let you offload OLTP work from a paired relational (or other) database: the cache performs the transactions and syncs them to the paired database. This allows you to replace the highly available relational servers you may have been running in front of your application to handle writes. And because transactions in RAM are much faster than transactions on disk, this also accelerates your application.
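
To make the pattern concrete, here is a minimal write-behind sketch in Python: a dictionary plays the cache, and a background worker syncs committed state to a SQLite table standing in for the paired relational database. The names and the SQLite schema are invented for the example; this illustrates the offloading flow, not Tarantool's connector API.

```python
# Write-behind sketch: serve transactions from RAM, sync to a relational store later.
# The dict, queue, and SQLite table are plain stand-ins, not Tarantool's API.
import queue
import sqlite3
import threading

cache = {}                   # in-memory primary copy
sync_queue = queue.Queue()   # committed states awaiting sync to the backing database

backing = sqlite3.connect(":memory:", check_same_thread=False)
backing.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")

def transfer(src, dst, amount):
    """Run the transaction against RAM, then enqueue the result for the database."""
    if cache.get(src, 0) < amount:
        raise ValueError("insufficient funds")   # nothing applied, nothing to roll back
    cache[src] -= amount
    cache[dst] = cache.get(dst, 0) + amount
    sync_queue.put(dict(cache))                  # snapshot of the committed state

def sync_worker():
    """Drain committed states into the relational database in the background."""
    while True:
        state = sync_queue.get()
        with backing:                            # one SQL transaction per sync
            for acct, balance in state.items():
                backing.execute(
                    "INSERT OR REPLACE INTO accounts (id, balance) VALUES (?, ?)",
                    (acct, balance),
                )
        sync_queue.task_done()

cache.update({1: 100, 2: 0})
threading.Thread(target=sync_worker, daemon=True).start()
transfer(1, 2, 40)
sync_queue.join()
print(backing.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
```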

In Tarantool, transactions are a key part of its architecture, which is fundamentally based on the actor model of computation (Carl Hewitt, 1973). Basically, Tarantool uses three operating system threads: one for network I/O, one for transactions, and one for the write-ahead log (WAL). Within each thread, there are "fibers" (i.e. "actors"), which are non-blocking and communicate by messaging. As far as the sequence goes, data comes in from the network thread and is processed in the transactional thread. The transactional thread sends a message to the WAL thread — which then returns a “commit” or a “rollback” result. Only upon passing through the WAL is a transaction committed.
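
The toy sketch below mirrors that commit path with ordinary Python threads and queues standing in for Tarantool's inter-thread messaging: a request reaches the transaction worker, is handed to the WAL worker for persistence, and is applied to the in-memory store only once the WAL worker answers "commit". It is a conceptual illustration of the flow described above, not Tarantool's actual internals.

```python
# Toy version of the pipeline above: queues stand in for inter-thread messages,
# and a change is applied to the store only after the WAL worker answers "commit".
import queue
import threading

tx_inbox = queue.Queue()    # "network thread" -> transaction thread
wal_inbox = queue.Queue()   # transaction thread -> WAL thread

def wal_thread(log_path="toy_wal.log"):
    while True:
        entry, reply = wal_inbox.get()
        try:
            with open(log_path, "a") as wal:     # append the entry durably
                wal.write(repr(entry) + "\n")
            reply.put("commit")
        except OSError:
            reply.put("rollback")

def transaction_thread(store):
    while True:
        request, reply_to_client = tx_inbox.get()
        wal_reply = queue.Queue(maxsize=1)
        wal_inbox.put((request, wal_reply))      # ask the WAL thread to persist it
        verdict = wal_reply.get()
        if verdict == "commit":
            store[request["key"]] = request["value"]   # apply only after the WAL ack
        reply_to_client.put(verdict)

store = {}
threading.Thread(target=wal_thread, daemon=True).start()
threading.Thread(target=transaction_thread, args=(store,), daemon=True).start()

# The "network thread" hands a request over and waits for commit or rollback.
client_reply = queue.Queue(maxsize=1)
tx_inbox.put(({"key": "user:1", "value": "alice"}, client_reply))
print(client_reply.get(), store)
```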

But what can ACID OLTP in a cache be used for? At mail.ru, Tarantool's heaviest use is in relation to authentication, push notifications, and advertising. Here are some hard numbers:

  • Authentication: Every page request, AJAX request, and API call in mail.ru's mobile applications uses Tarantool; average loads are 50K queries per second for login/password authentication and one million queries per second for session/token authentication; all of this work is done by 12 Tarantool servers (four with sessions, eight with user profiles).

  • Push notifications: These are for mobile devices and are sent by Tarantool to a queue, then eventually to iOS and Android APIs; push notifications at mail.ru consist of 200K queries and transactions per second.

  • Advertising: This is the heaviest workload of all for OLTP at mail.ru; in fact, it uses the largest Tarantool cluster in the world. A cool three million read transactions and one million write transactions per second are handled by the cluster.

Oversized Datasets

A limitation of most cache databases is their inability to work with datasets that don't fit into available RAM. Tarantool has overcome this limitation with the release in version 1.7 of its Vinyl engine, which accommodates data sets up to 100 times the size of available RAM.

Vinyl accomplishes this by working with disk in addition to RAM. It is heavily write-optimized: it is built on log-structured merge (LSM) trees and uses append-only writes to avoid costly disk seeks, with garbage collected periodically at checkpoints. Vinyl was designed to overcome the traditional weaknesses of LSM structures, namely subpar reads and unpredictable write latencies. It does so primarily through range partitioning: a single index is split by key range across many LSM data structures, each with its own in-memory buffer of adjustable size. This makes merges between LSM levels more granular and lets hot ranges be prioritized over cold ranges when allocating resources.
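
Here is a deliberately simplified sketch of the range-partitioning idea: one logical index is split by key boundaries, and each range owns a small in-memory buffer that is flushed to a sorted run when it fills. Real LSM engines, Vinyl included, add compaction, bloom filters, and real on-disk formats that are omitted here; the class names and buffer sizes are invented for illustration.

```python
# Simplified range-partitioned LSM index: each key range has its own memtable
# that is flushed to an append-only sorted run when it exceeds a size limit.
import bisect

class RangePartition:
    def __init__(self, buffer_limit=4):
        self.memtable = {}          # hot, in-RAM writes for this key range
        self.runs = []              # older data as sorted lists of (key, value)
        self.buffer_limit = buffer_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.buffer_limit:
            self.runs.append(sorted(self.memtable.items()))   # "flush to disk"
            self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for run in reversed(self.runs):                       # newest run first
            i = bisect.bisect_left(run, (key,))
            if i < len(run) and run[i][0] == key:
                return run[i][1]
        return None

class RangePartitionedIndex:
    def __init__(self, boundaries):
        self.boundaries = sorted(boundaries)                  # e.g. [1000, 2000]
        self.partitions = [RangePartition() for _ in range(len(boundaries) + 1)]

    def _partition_for(self, key):
        return self.partitions[bisect.bisect_right(self.boundaries, key)]

    def put(self, key, value):
        self._partition_for(key).put(key, value)

    def get(self, key):
        return self._partition_for(key).get(key)

index = RangePartitionedIndex(boundaries=[1000, 2000])
for k in range(0, 3000, 250):
    index.put(k, f"row-{k}")
print(index.get(1500), index.get(2750))
```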

Fine-Tuned Persistence

Snapshotting (dumping) and logging are the fundamental elements of in-memory database persistence: the former backs up all data at a given point in time, while the latter backs up transactions as they occur by appending them to a file. Tarantool combines the two approaches by logging every change made since the latest snapshot. The size of each of Tarantool's log files (the WAL) can be configured, and, as mentioned above, requests (insert, update, delete, replace, and upsert) are processed atomically: a change is either accepted and recorded in the WAL or discarded completely. Snapshotting is naturally a much heavier process than logging, so combining the two as Tarantool does keeps disk strain to a minimum.
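
The following sketch shows the snapshot-plus-log scheme in miniature: every request is appended to a WAL, a snapshot periodically dumps the full dataset and truncates the log, and recovery loads the latest snapshot before replaying only the entries logged after it. The file names, JSON format, and fsync policy are illustrative assumptions, not Tarantool's actual on-disk layout.

```python
# Snapshot + WAL in miniature: log each change, periodically dump everything and
# truncate the log, and on restart replay only what was logged after the snapshot.
import json
import os

SNAPSHOT = "data.snap"   # illustrative file names, not Tarantool's layout
WAL = "data.wal"

def apply(store, entry):
    """Apply one logged request to the in-memory store."""
    if entry["op"] in ("insert", "replace", "update", "upsert"):
        store[entry["key"]] = entry["value"]
    elif entry["op"] == "delete":
        store.pop(entry["key"], None)

def log_request(entry):
    """Append a change to the WAL; it is durable once flushed to disk."""
    with open(WAL, "a") as wal:
        wal.write(json.dumps(entry) + "\n")
        wal.flush()
        os.fsync(wal.fileno())

def take_snapshot(store):
    """Dump the whole dataset and truncate the WAL (the heavy, infrequent step)."""
    with open(SNAPSHOT, "w") as snap:
        json.dump(store, snap)
    open(WAL, "w").close()

def recover():
    """Rebuild RAM state: load the last snapshot, then replay the newer WAL entries."""
    store = {}
    if os.path.exists(SNAPSHOT):
        with open(SNAPSHOT) as snap:
            store = json.load(snap)
    if os.path.exists(WAL):
        with open(WAL) as wal:
            for line in wal:
                apply(store, json.loads(line))
    return store

store = {}
for entry in ({"op": "insert", "key": "a", "value": 1},
              {"op": "replace", "key": "a", "value": 2}):
    log_request(entry)          # record first...
    apply(store, entry)         # ...then commit to RAM
take_snapshot(store)
log_request({"op": "insert", "key": "b", "value": 3})
apply(store, {"op": "insert", "key": "b", "value": 3})
print(recover())                # {'a': 2, 'b': 3}
```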

Although it is clear that Tarantool has already successfully tackled several thorny issues related to in-memory caching, it is continuously being improved by a dedicated engineering team — as well as many open-source contributors. Should you be interested in learning more about Tarantool, please contact us at tarantool.io.
