Three Nodes, Highly Available Cluster with VoltDB
Curator's Note: The content of this article was originally written by Philip Rosegay over at the VoltDB blog.
You’ve probably heard about VoltDB, the super-fast distributed ACID SQL RDBMS for OLTP, but you might not be aware of its throughput capabilities in detail. VoltDB achieves its high throughput by eliminating the locking and latching of conventional databases. It’s also distributed, can automatically shard your data, and has a number of other really cool features that make transaction processing a snap.
We took a key-value application and implemented it in VoltDB (the app ships with the distribution if you want to try it). We set it up on four Dell R510s, each with 64GB of RAM and dual 2.93GHz Intel 6-core processors. We used three machines for the database, giving us a total of 192GB of memory, and the fourth machine for the client application. All the machines are interconnected by 10Gb Ethernet.

We configured VoltDB as a K=1 cluster (VoltDB-speak for a database cluster that can withstand one node failing without loss of availability, loss of data, or a significant degradation of performance). Furthermore, we told VoltDB to create 10 partitions on each node; these represent independent processing threads on each node. VoltDB uses this information to shard the data and enable parallel processing of queries. (If you try this at home, you’ll want to experiment a bit to find the best partition count for your hardware, but start around 75% of the physical core count. Also, the partition count times the number of nodes in your cluster must be a multiple of the K factor plus one.)

Our sample application has one table with two columns: a key (a 32-character string) and a value (a 1K-character string). In this exercise, the app was configured to perform 90% reads and 10% writes, and all operations are single-partition (VoltDB jargon indicating that the data required for each query is present in a single partition). Our application generates keys and values for the queries at random, so the resulting data is uniformly distributed.
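As a rough sketch of what this configuration looks like (the table and file names here are illustrative, not taken from the sample app; consult the VoltDB documentation for your version), the cluster topology goes in the deployment file and the sharding in the DDL:

```xml
<!-- deployment.xml: 3 hosts, 10 partitions (sites) per host, K=1 -->
<deployment>
  <cluster hostcount="3" sitesperhost="10" kfactor="1"/>
</deployment>
```

```sql
-- Hypothetical DDL for the key-value table described above
CREATE TABLE kv (
  key   VARCHAR(32)   NOT NULL,
  value VARCHAR(1024),
  PRIMARY KEY (key)
);
-- Shard rows by key so each read/write is single-partition
PARTITION TABLE kv ON COLUMN key;
```

Note the constraint mentioned above: sitesperhost times hostcount (10 × 3 = 30) must be a multiple of K + 1 (here 2), which it is.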
Now, with the configuration completed, we set out to explore throughput. In particular, we wanted to understand how VoltDB would perform at various load levels in terms of transactions per second and latency. In this app a transaction is either a read or a write of the value column for a given key. Ideally, we want our production application to operate at maximum throughput and minimum latency, but varying real-world workload requirements don’t always permit that; in any case, we want to understand how to measure these characteristics. To that end, VoltDB’s Java client has a number of useful features, such as rate limiting, statistics collection, detailed latency reporting, and connection status reporting.

We used the rate limiter to profile VoltDB’s throughput and latency at various points and to plot a curve of its performance. We started by setting the rate limiter well above the TPS each client instance was handling, effectively firehosing the database with more work than it could handle, and we kept adding client instances until we didn’t see any further increase in throughput. At that point we were at just under 1 million transactions per second. Then we applied rate limiting, backing off about 10% on each pass, and recorded the throughput and latency reported by the client for each pass. We went back and made additional measurements in the region where latency was changing (in our case, from about 950K TPS up to the maximum) to get a more accurate picture of the performance curve. We finished off the profile below 700K TPS by halving the rate-limit value on each pass.
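The measurement schedule above can be sketched as a small helper (a hypothetical harness, not part of the VoltDB client): start above the saturation point, back off roughly 10% per pass while latency is changing, then halve the limit on each pass once below 700K TPS:

```java
import java.util.ArrayList;
import java.util.List;

public class RateSweep {
    /** Build the list of target rates for the profiling passes.
     *  Thresholds (700K TPS boundary, 10% backoff, halving) come from
     *  the procedure described in the article. */
    static List<Integer> schedule(int startTps) {
        List<Integer> rates = new ArrayList<>();
        int tps = startTps;
        while (tps > 700_000) {       // fine-grained passes near the knee
            rates.add(tps);
            tps = (int) (tps * 0.9);  // back off ~10% per pass
        }
        while (tps >= 10_000) {       // coarse passes in the flat region
            rates.add(tps);
            tps /= 2;                 // halve the rate limit each pass
        }
        return rates;
    }

    public static void main(String[] args) {
        // Starting just above the ~1M TPS saturation point we measured
        System.out.println(schedule(1_000_000));
    }
}
```

Each target rate would then be handed to the client’s rate limiter (e.g. `ClientConfig.setMaxTransactionsPerSecond` in VoltDB’s Java client) before a measurement pass.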
The resulting chart plots transaction rate (TPS) vs. latency (ms). As you can see from the chart, latency as measured from the client application is essentially flat up to almost 800K TPS, after which it starts increasing. When the servers are maxed out, the client will queue requests (up to a configurable point) or block, and the increase in latency at high transaction rates is evidence of this behavior in the client. It’s important to note that even though we hit the wall at around 950K TPS with this configuration, VoltDB’s architecture makes it possible to add capacity by increasing the size of the server cluster.
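The queue-then-block behavior can be illustrated with a plain bounded queue (an analogy, not VoltDB’s actual client code): pending transactions queue up to a configured bound, after which the submitter must wait:

```java
import java.util.concurrent.ArrayBlockingQueue;

public class BoundedBackpressure {
    public static void main(String[] args) {
        // A bound of 2 stands in for the client's configurable queue depth
        ArrayBlockingQueue<String> pending = new ArrayBlockingQueue<>(2);
        System.out.println(pending.offer("txn-1")); // true: queued
        System.out.println(pending.offer("txn-2")); // true: queued
        System.out.println(pending.offer("txn-3")); // false: queue full, caller must wait
    }
}
```

Once the queue is full, submission latency as seen by the application rises, which is the knee in the curve described above.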
The performance we measured on this generic workload was impressive: we assembled an extremely powerful, highly available, and fully durable transaction processing database from commodity components in a matter of hours. VoltDB is also highly configurable; you can write your own embedded Java stored procedures that leverage the SQL backend. This combination readily enables building demanding, scalable applications such as real-time analytics, session management, and more. If you’ve got an application with scaling challenges, please reach out to us and let’s discuss how VoltDB can help.