
Busting 4 Myths of In-Memory Databases

There are a lot of myths about in-memory databases. Here are four that we'll bust about this popular database model.



Real-Time Responses Can Make or Break Your App

The world as we know it is changing at a speed we’ve never seen before. The best application user experience requires blazing fast response times in the range of a tenth of a second. If you factor in 50 milliseconds for an average acceptable Internet round trip response time, that means your application should process data and return responses in the remaining 50 milliseconds. This in turn requires sub-millisecond database response times, particularly if you’re handling thousands of simultaneous requests. Only the most nimble yet versatile databases can satisfy this need for speed.
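
To make that budget concrete, here is a small back-of-the-envelope sketch in Python. The 100 millisecond target and 50 millisecond round trip come from the paragraph above; the call counts, per-request work, and request rate are purely illustrative assumptions.

```python
# Back-of-the-envelope latency budget for a "tenth of a second" user experience.
# The 100 ms target and 50 ms network round trip come from the article;
# the remaining numbers are illustrative assumptions.

TARGET_RESPONSE_MS = 100      # roughly a tenth of a second, end to end
NETWORK_RTT_MS = 50           # average acceptable Internet round trip

app_budget_ms = TARGET_RESPONSE_MS - NETWORK_RTT_MS   # 50 ms left for processing

db_calls_per_request = 10     # assumed number of database calls per request
app_work_ms = 30              # assumed time spent outside the database

db_budget_ms = (app_budget_ms - app_work_ms) / db_calls_per_request
print(f"Each database call must finish in about {db_budget_ms:.1f} ms")   # ~2 ms

# At thousands of simultaneous requests, per-call service times need to stay
# sub-millisecond on the server so requests don't queue up behind one another.
requests_per_second = 5_000   # assumed load
db_ops_per_second = requests_per_second * db_calls_per_request
print(f"Required database throughput: {db_ops_per_second:,} ops/sec")
```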

In big data processing scenarios, where insights must be gleaned and decisions made rapidly, in-memory databases can cut processing times from hours or minutes to seconds without complex optimizations or compromises. But there are a lot of myths about in-memory databases. Most people think they are unreliable, inconsistent and expensive. But most of all they think it is enough to run any kind of database in memory to get the speed they need.

Myth #1: All In-Memory Databases are Equally Fast

This is simply not true. Even though most in-memory databases are written in efficient programming languages such as C or C++, they vary in responsiveness because:

  • The complexity of processing commands differs from database to database. The fastest ones are those in which most commands execute with minimal complexity (for example, O(1) operations as opposed to O(n), where n is the number of items in your dataset or data structure). This particularly matters if your dataset grows over time and you don’t want to continually re-optimize query times.
  • Query efficiency also varies. Sometimes the database treats all data brought into memory as a single BLOB (as memcached does for caching), which is inefficient. Databases need to retain the ability to query discrete values, to save memory and network overhead and significantly reduce application processing times.
  • Network pipelining, protocol, and connection efficiency vary. Databases need the ability to pipeline commands for fewer context switches, and to parse requests and serialize responses fast. They need to maintain long-lived connections to the data in order to avoid wasting time in the setup and teardown of TCP connections (see the sketch after this list).
  • Single-threaded vs multi-threaded architecture choices present tradeoffs. Multi-threading utilizes all available computing power with no effort from the operator. But such solutions also require substantial internal management and synchronization, which in turn consume considerable computing resources. The locking overheads associated with a multi-threaded architecture can dramatically reduce your database performance.
  • A single-threaded architecture offers a drastically simpler, lock-free execution model with a much lower computational cost, but it shifts the responsibility for managing compute resources onto the operator or user. The ideal solution, of course, is one in which the operator or user doesn’t have to worry about compute resource management, because the database handles it without being onerously resource-intensive.

  • Shared nothing vs shared something vs shared everything architecture choices impact scalability. Database sizes increase over time, and performance must scale as you add instances. A shared-nothing model ensures that each entity (or process) operates as an independent unit and there is no communication overhead when the number of processes increases, thus enabling linear scalability.
  • Built-in acceleration components such as zero-latency distributed proxies can significantly improve the performance of the database by offloading many network-oriented tasks (such as connection setup/teardown and management of a large number of connections) and by reducing the TCP protocol overhead. In some situations, the proxy can communicate with the database as if it were another process on the machine, playing the role of an active local client that works on behalf of many remote clients.
  • If throughput and latency are your primary yardsticks, then you should choose a database that maximizes throughput at a consistent sub-millisecond latency with the minimum number of servers needed.
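
The first three points above are easiest to see with a concrete client. The sketch below is only an illustration, assuming the redis-py client and a local Redis instance (the host, key names, and field names are made up): it stores a record as a hash so a single field can be read back in O(1) without pulling the whole object, and it pipelines several reads into one network round trip.

```python
import json
import redis  # assumes the redis-py client and a local Redis instance

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Anti-pattern: treat the record as one opaque blob (what a plain cache does).
# Reading a single attribute means fetching and parsing the whole value.
r.set("user:1001:blob", json.dumps({"name": "Ada", "email": "ada@example.com", "plan": "pro"}))
email = json.loads(r.get("user:1001:blob"))["email"]

# Better: store the record as a hash so discrete fields stay queryable.
# HSET/HGET on a single field are O(1) regardless of how many fields the hash has.
r.hset("user:1001", mapping={"name": "Ada", "email": "ada@example.com", "plan": "pro"})
email = r.hget("user:1001", "email")           # O(1), no client-side parsing

# Pipelining: batch several commands into one network round trip
# instead of paying a round trip (and context switch) per command.
pipe = r.pipeline(transaction=False)
for user_id in range(1000, 1010):
    pipe.hget(f"user:{user_id}", "plan")
plans = pipe.execute()                         # one round trip for ten reads
```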

The figures below show the performance differences between several in-memory NoSQL databases running in a real-world scenario.

Figure: Servers Needed to Deliver 1 Million Ops/Second


Myth #2: In-Memory Computing is Unreliable and Inconsistent

Most NoSQL databases (not just the in-memory ones) send an acknowledgement (ack) to the client before they commit data to disk or to a replica. Therefore, it’s possible that an outage could leave data in an inconsistent state.
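
When a particular write does need a stronger guarantee, some in-memory databases let you wait for replication explicitly. As one hedged illustration, Redis offers a WAIT command that blocks until a given number of replicas have acknowledged the preceding writes; the sketch below uses redis-py, and the key name, replica count, and timeout are assumptions.

```python
import redis  # assumes redis-py and a Redis primary with at least one replica

r = redis.Redis(host="localhost", port=6379)

# The write is acknowledged as soon as the primary has it in memory...
r.set("order:42:status", "paid")

# ...but WAIT blocks until N replicas have acknowledged the write (or the
# timeout in milliseconds expires), trading a little latency for safety.
acked = r.execute_command("WAIT", 1, 100)   # wait for 1 replica, up to 100 ms
if acked < 1:
    # The replica did not confirm in time; the application decides how to react.
    raise RuntimeError("write not yet replicated")
```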

The CAP theorem states that no distributed computer system can simultaneously provide consistency, availability, and partition tolerance. Different databases fall into different categories, as shown here:

Figure: CAP Quadrants


Picking a CP-model database means developers don’t have to worry about consistency, but ‘write’ commands are not allowed during network split events.

Picking an AP model means that the database is still available for both ‘write’ and ‘read’ commands under a network partition, but the developers have to write application code to ensure consistency rather than having the database enforce it.

Just pick the right database model depending on your use case.

Myth #3: In-Memory Computing is Hard to Scale

There are a couple of ways to think about scale. You can scale up by upgrading the servers that host your database (e.g., adding more RAM or cores). Or you can scale out by adding more servers to your in-memory cluster. With some databases, you can run multiple slices (or shards) of the same dataset on the same node, thereby delaying the need to add more nodes by utilizing all existing cores first. You can also treat memory across multiple servers as a single, shared pool, enabling scaling beyond the limitations of single-node memory. Some in-memory databases today allow you to scale your database up/down or out/in in a seamless manner that maximizes the responsiveness of your applications by dynamically adjusting the number of cores and memory nodes allocated to your database.
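
To make the scale-out mechanics concrete, here is a minimal sketch of client-side sharding, assuming a fixed slot space and an explicit slot-to-node table (the slot count, ranges, and node names are illustrative; Redis Cluster, for example, uses 16,384 slots derived from a CRC16 of the key). Growing the cluster then means reassigning slot ranges rather than rehashing every key.

```python
import zlib

# Illustrative only: keys hash to a fixed slot space, and slot ranges are
# assigned to nodes. Rebalancing edits the table below, not the keys.
NUM_SLOTS = 16384

# Assumed slot-range ownership for a three-node cluster.
slot_ranges = [
    (0, 5460, "node-a:6379"),
    (5461, 10922, "node-b:6379"),
    (10923, 16383, "node-c:6379"),
]

def slot_for(key: str) -> int:
    """Map a key to a stable slot, independent of how many nodes exist."""
    return zlib.crc32(key.encode()) % NUM_SLOTS

def node_for(key: str) -> str:
    """Route a key to the node that currently owns its slot."""
    s = slot_for(key)
    for lo, hi, node in slot_ranges:
        if lo <= s <= hi:
            return node
    raise ValueError("slot not assigned")

print(node_for("user:1001"))  # the same key always routes to the same node
```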

Myth #4: In-Memory Computing is Expensive

Any use case that needs fast throughput raises the question: “Exactly how many resources are needed for a certain level of throughput?” For example, an in-memory database that serves 1.5 million operations/sec with a single Amazon EC2 instance is far cheaper to run than alternative databases that may not run in-memory (and might be cheaper in terms of disk resources per server) but end up using tens of servers to reach the same throughput.
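
The arithmetic behind that comparison is easy to sketch. Every figure below (per-instance throughput, hourly prices) is a placeholder assumption rather than a benchmark number; the point is only that server count tends to dominate the bill.

```python
import math

# Hypothetical cost comparison for a target workload; every figure here is a
# placeholder assumption, not a measured result.
target_ops_per_sec = 1_500_000

in_memory_ops_per_instance = 1_500_000   # assumed: one instance can carry the load
disk_based_ops_per_instance = 50_000     # assumed: throughput of a disk-bound alternative

in_memory_hourly = 2.50    # assumed price of one large memory-optimized instance
disk_based_hourly = 0.50   # assumed price of a cheaper disk-oriented instance

def monthly_cost(ops_per_instance: int, hourly_price: float) -> float:
    instances = math.ceil(target_ops_per_sec / ops_per_instance)
    return instances * hourly_price * 24 * 30

print(f"In-memory:  ${monthly_cost(in_memory_ops_per_instance, in_memory_hourly):,.0f}/month")
print(f"Disk-based: ${monthly_cost(disk_based_ops_per_instance, disk_based_hourly):,.0f}/month")
# Even at a higher per-instance price, needing 1 server instead of 30 wins.
```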

If your dataset runs to several TBs and the cost of memory becomes an issue, there are now technical innovations that make it possible to use flash memory as a RAM extender, making the cost of such a solution 10 times cheaper than a purely in-memory one. Keep in mind that flash as a RAM extension will impact performance to some extent, so the ideal technology here is one that allows the user to define their RAM/flash ratio to get to the desired price/performance. RLEC from Redis Labs provides this type of configurability.
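
Similarly, the RAM/flash tradeoff reduces to a blended cost per gigabyte. The prices and ratios below are illustrative assumptions, not vendor figures; the point is that the ratio is the knob that sets your price/performance.

```python
# Blended cost of a RAM+flash dataset; prices and ratios are illustrative assumptions.
dataset_gb = 4_000                 # a several-TB dataset
ram_cost_per_gb = 8.0              # assumed monthly cost of DRAM-backed capacity
flash_cost_per_gb = 0.8            # assumed monthly cost of flash-backed capacity

for ram_ratio in (1.0, 0.3, 0.1):  # fraction of the dataset kept in RAM
    cost = dataset_gb * (ram_ratio * ram_cost_per_gb + (1 - ram_ratio) * flash_cost_per_gb)
    print(f"{int(ram_ratio * 100):3d}% in RAM: ${cost:,.0f}/month")
```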

To Summarize

In-memory computing options have evolved over the last few years to be reliable, consistent, highly scalable, and inexpensive. Sub-tenth-of-a-second latency for databases and analytic processing times measured in minutes are becoming the de facto industry standard. To keep up with the competition and deliver the best possible application user experience, data architects must incorporate in-memory computing into their application stacks.

Note: Download the independent benchmarks mentioned in the article here.
