NoSQL, RDBMS, and Contention for Shared Data (Part 1)

Scalability is a key concern when it comes to distributed apps. In this series, we look at how to handle contention for shared data in such a system as a system grows.

First off, I'm not here to bash NoSQL. NoSQL came into existence for a good reason: your generic legacy SQL RDBMS is often slow and expensive when compared to NoSQL alternatives. But much of the discussion I've seen about NoSQL, NewSQL, legacy RDBMS, and other technologies gets hung up on issues such as the use of SQL instead of focusing on what I would argue is the single most important issue we face, the fundamental barrier to scalability: contention for shared data. Once you accept this, your architectural choices become much clearer.

What Does Contention for Shared Data Mean?

With the emergence of the IoT, we have reached the point where the majority of database interactions are now with other computer programs, not humans. This has significant implications:

  • Latency expectations will become much more demanding: not only does software hate waiting more than a couple of milliseconds, it also becomes more complex when it has to.
  • Volumes will increase by 10X-100X as millions of tiny devices communicate with each other in real time and every other aspect of our lives is managed and optimized on a millisecond-by-millisecond basis.
  • Accuracy will become ever more important because we are now making decisions that impact real-world physical resources.

Making instant, accurate decisions on an individual basis at mass scale is technically demanding. Access to the data associated with an individual or their IoT device faces the same challenges as a traditional OLTP system, but with roughly 100X the volume. When you consider that some of this data will be subject to very frequent changes (such as available credit being used up), it becomes apparent that you need an appropriate architecture. There are a limited number of ways to solve this problem:

  • Use a session to make a series of discrete changes to the data and then either commit or roll back.
  • Move the data to the client, which changes it and sends it back.
  • Move the problem to the data, do what needs to be done, and send an answer back.

For trivial examples, they all work about equally well. But as application complexity increases, problems appear.

Method #1: Using a Session to Do a Series of Reads and Changes That Are Then Committed or Rolled Back

This is the basis of traditional JDBC-style interaction. A key premise of this approach is that until you commit or roll back, no other session can see what you have done. Another aspect is that each SQL statement generally requires its own network round trip. So, from the client's perspective, performance degrades as complexity increases, even if each of the SQL statements is trivial.
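
To make the round-trip pattern concrete, here is a minimal JDBC-style sketch (a fragment, not a complete program). The account table, its columns, and the spend() method are hypothetical; the Connection, PreparedStatement, and commit/rollback calls are standard java.sql API. Each statement costs its own trip to the server, and none of the work is visible to other sessions until commit().

    // Hypothetical example: several reads/writes in one session, then commit or roll back.
    void spend(DataSource dataSource, long accountId, BigDecimal amount) throws SQLException {
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false);                       // open a transaction
            try {
                BigDecimal balance;
                try (PreparedStatement read = conn.prepareStatement(
                         "SELECT balance FROM account WHERE id = ? FOR UPDATE")) {
                    read.setLong(1, accountId);              // network round trip #1
                    try (ResultSet rs = read.executeQuery()) {
                        rs.next();
                        balance = rs.getBigDecimal(1);
                    }
                }
                if (balance.compareTo(amount) < 0) {
                    conn.rollback();                         // other sessions never saw a thing
                    return;
                }
                try (PreparedStatement write = conn.prepareStatement(
                         "UPDATE account SET balance = balance - ? WHERE id = ?")) {
                    write.setBigDecimal(1, amount);          // network round trip #2
                    write.setLong(2, accountId);
                    write.executeUpdate();
                }
                conn.commit();                               // changes become visible only now
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }
    }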

From the server side, things get unpleasant, fast. For early single-CPU servers, keeping track of what each of the fifty or so sessions could see and do was relatively easy because, in practice, only one session could be on the CPU at a time. Multi-core CPUs are a double-edged sword for a legacy RDBMS. They provide extra CPU power, but a core handling a session now has to anticipate that one of the 500 or so other sessions might be running on another core at the same instant and is about to do or see something that it shouldn't.

As a result, locking and latching now take up roughly 30% of all the available CPU time. When you start trying to do this with a cluster of several machines, the overheads involved skyrocket, as instead of inter-process communication and shared latches, we’re now dealing with network trips to another cluster member to request the use of a resource.

This is the reason why horizontal scalability often doesn’t work for a legacy RDBMS if you have a write-centric workload. Note that the issue is not the use of SQL, but that at any moment in time, each session has its own collection of uncommitted changes that collectively create a massive overhead.

Method #2: Use a NoSQL Approach Where You Move the Data to the Client, Which Changes It and Sends It Back

From the very start, KV stores were written with clusters in mind, and as a consequence, they nearly always use some form of partitioning algorithm to map the key to one or more physical servers. This bypasses the clustering overheads seen in a legacy RDBMS. KV stores also generally don't provide read consistency or locking, which means that another large chunk of CPU time previously devoted to latching and locking goes away. Instead, you provide a key and retrieve a value, which could be anything at all. Developers love KV stores because they get to decide what the value is without having to define a schema. But while KV stores are very efficient for simple transactions from a raw CPU usage perspective, they struggle as application complexity and speed increase.
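
As a rough sketch of this read-modify-write pattern (kvClient, the Wallet value class, and the key format are illustrative placeholders, not any particular product's API):

    // Hypothetical KV read-modify-write: fetch the value, change it locally, send it back.
    String key = "wallet:" + userId;
    byte[] raw = kvClient.get(key);              // move the data to the client...
    Wallet wallet = Wallet.deserialize(raw);     // the value is whatever the app decided to store
    wallet.spend(amount);                        // ...change it on the client...
    kvClient.put(key, wallet.serialize());       // ...and send it back (last write wins)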

Problem #1: There Is No Mechanism to Control Access to Data

At any moment in time, there could be a number of updated values for a given key on their way back to finish an update. For many applications, this has a negligible impact, but if something of value is being used up, spent, or allocated, we run the risk of an update either being instantly overwritten or, in the "best case" scenario, being rejected. In real life, we will usually see a small number of records that everyone wants to update, and for a KV store this presents a real challenge, as it may take an arbitrary number of attempts to make an update stick if a large number of clients are trying to write it at once. The bottom line is that this approach has the same scalability issues as session-based interaction, but they manifest as aborted transactions, network bandwidth consumption, and erratic latency instead of server-side locking errors and CPU usage.
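
A common mitigation is an optimistic retry loop around a conditional (versioned, or compare-and-swap style) write, which many KV stores offer in some form. The sketch below is hypothetical (VersionedValue, getVersioned(), and putIfVersion() are invented names), but it shows why a "hot" key can take an unpredictable number of attempts and a correspondingly unpredictable amount of time:

    // Hypothetical optimistic retry: the write only succeeds if nobody changed the key since we read it.
    final int maxAttempts = 10;
    boolean committed = false;
    int attempts = 0;
    while (!committed && attempts++ < maxAttempts) {
        VersionedValue current = kvClient.getVersioned(key);
        Wallet wallet = Wallet.deserialize(current.data());
        if (!wallet.canSpend(amount)) {
            break;                               // business-level rejection, no point retrying
        }
        wallet.spend(amount);
        // Fails (returns false) if another client updated the key in the meantime.
        committed = kvClient.putIfVersion(key, wallet.serialize(), current.version());
    }
    // Under heavy contention on one key, attempts, latency, and wasted bandwidth all grow.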

Problem #2: Emergent Complexity

What happens when you need more than one key-value pair to implement your use case? Suddenly, life becomes very hard for the client, as it not only has to orchestrate multiple atomic updates of KV pairs but also has to have contingency plans for when some (but not all) of them fail. Add in the "secret sauce" of eventual consistency and life becomes really hard.
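
As a rough illustration of the burden this places on the client, the hypothetical sketch below updates two keys and has to hand-roll its own compensation when the second write fails; the keys and the debit()/credit() helpers are placeholders:

    // Hypothetical client-coordinated update across two KV pairs, with manual "undo".
    byte[] fromBefore = kvClient.get(fromKey);
    byte[] toBefore   = kvClient.get(toKey);
    try {
        kvClient.put(fromKey, debit(fromBefore, amount));    // update #1 succeeds
        kvClient.put(toKey,   credit(toBefore, amount));     // update #2 may fail...
    } catch (RuntimeException e) {
        // ...leaving the client to undo update #1 itself and hope that nothing else
        // has touched fromKey in the meantime (eventual consistency makes even that uncertain).
        kvClient.put(fromKey, fromBefore);
        throw e;
    }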

The bottom line: KV stores work well for small applications, but as complexity increases, they end up as bad as the traditional legacy RDBMS approach, with the pain manifesting itself as complexity and inaccuracy instead of poor CPU performance and high license fees.

The Third Method

Stay tuned for the third method, why it's better, and what it implies for modern database and application architecture, coming in a later post. Let us know what you think of Part 1 in the comments below.
