Using New Relic to Understand Redis Performance: The 7 Key Metrics
Guest author Itamar Haber is chief developer advocate at Redis Labs.
Redis is designed to be a blazingly fast in-memory database, but what does “fast” actually mean? How do you know whether your Redis database is fast enough for your purposes, and what telltale signs should you be looking for? Whether you’re using a hosted Redis service (such as Redis Cloud) or operating your own Redis server, understanding your database’s performance is key.
Redis’ performance is quantifiable and measurable by tracking several key metrics, and by monitoring these you can gain insight into how well your database is performing. The performance of your application is directly affected by that of the database that powers it, so obtaining a holistic view of the entire stack is imperative for ensuring that you continue delivering value to your users. Monitoring code-level metrics to distill actionable information is one of the New Relic Platform’s core strengths, so it’s useful to understand what you can learn from these crucial metrics and how to access them via Redis Labs’ free Redis Cloud plugin for New Relic. (Although we’re talking specifically about Redis here, many of these points and tips hold true for any in-memory store.)
How fast is Redis? Throughput and latency
Redis’ performance is made up of two key factors: throughput and latency. Throughput is the number of operations processed by the database within a given period of time. Usually, as your application’s activity increases, so does the number of operations that it performs against the database, consequently increasing the throughput that the database has to sustain.
Each application’s throughput requirements are unique and are derived from its business logic and the activity patterns of its users. While some benchmarks demonstrate Redis’ support for hundreds of thousands of operations per second (OPS, or op/s), it’s preferable to rely on baselines based on your application’s own needs. To establish such a baseline, monitor throughput over a period of time and analyze it to identify patterns (most importantly peaks) and trends.
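One way to establish such a baseline yourself is to sample the `total_commands_processed` counter that Redis exposes via the INFO command and compute the delta over a known interval. A minimal sketch, assuming you’ve captured two `redis-cli INFO stats` dumps (the snapshot text below is illustrative sample output, not live data):

```python
# Estimate average throughput from two `INFO stats` snapshots taken
# a known number of seconds apart.

def parse_info(info_text):
    """Parse `key:value` lines from a Redis INFO dump into a dict."""
    stats = {}
    for line in info_text.splitlines():
        if ":" in line and not line.startswith("#"):
            key, _, value = line.partition(":")
            stats[key] = value.strip()
    return stats

def throughput(before, after, interval_seconds):
    """Average ops/sec between the two snapshots."""
    delta = (int(after["total_commands_processed"])
             - int(before["total_commands_processed"]))
    return delta / interval_seconds

snapshot_1 = "# Stats\ntotal_commands_processed:1500000"
snapshot_2 = "# Stats\ntotal_commands_processed:1620000"
ops = throughput(parse_info(snapshot_1), parse_info(snapshot_2), 60)
print(ops)  # 2000.0 ops/sec averaged over the 60-second window
```

Sampling at a fixed interval (say, once a minute) and charting the result over a few days is usually enough to surface the peaks and trends mentioned above.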
The following example (taken from our New Relic Redis Cloud plugin’s Performance Metrics dashboard) demonstrates moderate throughput of a healthy application:
A single Redis server’s throughput is finite. Depending on a myriad of factors (including your host’s resources, the data you have stored, and the operations you perform against it), this limit may be absurdly high or frighteningly low. Monitoring the server’s resources, e.g., CPU, storage, and network, is always a good idea (unless your Redis provider takes care of these for you), but one immediate sign that you’ve reached your Redis’ throughput limit is an increase in average latency.
Latency (also available in the dashboard shown above) is the time it takes for an operation to complete. In a distributed system there are many latencies to consider, each reflecting the perspective of a different component in play. Redis’ reported latency is the time from when the server receives a request until it sends back the reply. Like throughput, Redis’ latency is affected by the compute resources, data, and operations that you employ, so “real” numbers are best obtained from your own monitoring tools.
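When you do collect your own numbers (for example, round-trip samples from `redis-cli --latency` or from client-side timing), averages alone can hide the outliers that hurt users. A minimal sketch of summarizing such samples, with an illustrative list of measurements:

```python
# Summarize round-trip latency samples (in milliseconds). The sample
# list below is made up for illustration; substitute your own readings.

def latency_summary(samples_ms):
    """Return (average, p99) for a list of latency samples."""
    ordered = sorted(samples_ms)
    average = sum(ordered) / len(ordered)
    # Nearest-rank approximation of the 99th percentile.
    p99 = ordered[min(len(ordered) - 1, int(len(ordered) * 0.99))]
    return average, p99

samples = [0.4, 0.5, 0.5, 0.6, 0.7, 0.8, 1.1, 4.9]  # one outlier
avg, p99 = latency_summary(samples)
```

Here the single 4.9 ms outlier barely moves the average but dominates the p99, which is why percentile-based latency budgets tend to be more useful than mean-based ones.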
“How fast is Redis?” remains a tricky question since the answer really depends on what you’re doing with it. Perhaps a better question would be “how fast do you need Redis to be?” As long as your Redis latency doesn’t break the budget during peak hours, you’re probably still in the safe zone, but there’s always room for improvement.
Here are five more performance-related metrics that you should track:
5 more important Redis metrics
1. Memory Usage
Since Redis is an in-memory database, RAM is perhaps its most limited resource. Once Redis exhausts the RAM it is configured to use for storing data, it will apply its configured eviction policy (more on that below), resulting in either out-of-memory errors or data eviction. The plugin’s Usage Metrics dashboard provides you with that information, as shown below. You can find more help on this Redis memory-optimization page:
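If you’re rolling your own alerting, the relevant fields are `used_memory` from `INFO memory` and the configured `maxmemory` limit. A minimal sketch with illustrative values and an arbitrary 80% warning threshold:

```python
# Flag an approaching memory limit from INFO memory fields.
# The byte counts and the 0.80 threshold are illustrative only.

def memory_utilization(used_memory_bytes, maxmemory_bytes):
    """Fraction of the configured maxmemory currently in use."""
    if maxmemory_bytes == 0:   # maxmemory:0 means "no limit" in Redis
        return None
    return used_memory_bytes / maxmemory_bytes

ratio = memory_utilization(used_memory_bytes=3_758_096_384,   # ~3.5 GB
                           maxmemory_bytes=4_294_967_296)     # 4 GB
if ratio is not None and ratio > 0.80:
    print(f"warning: {ratio:.0%} of maxmemory in use; eviction or OOM ahead")
```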
2. Connections
Every operation is sent to Redis within the context of a connection from the client application. The maximum number of concurrent connections to your Redis server is always limited—whether by your operating system, Redis’ configuration, or your service provider’s plan—and you should always ensure you have enough free resources to allow new application clients to connect or to open an administrative session. For more information, see this Redis latency problems troubleshooting page.
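The inputs here are the `connected_clients` counter from `INFO clients` and the server’s `maxclients` setting (retrievable with `CONFIG GET maxclients`). A minimal sketch of the headroom check, with illustrative numbers and an arbitrary two-slot reserve for admin sessions:

```python
# Compute free connection slots against the maxclients limit.
# The counts below are illustrative; the reserve size is a design choice.

def connection_headroom(connected_clients, maxclients, reserve=2):
    """Free connection slots, minus a reserve kept for admin sessions."""
    return maxclients - connected_clients - reserve

free_slots = connection_headroom(connected_clients=9_800, maxclients=10_000)
if free_slots <= 0:
    print("warning: connection limit reached; new clients will be refused")
```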
3. Cache Hit Ratio
Redis can be used as an intelligent cache by configuring it with the proper eviction policy. When employed that way, a cache is most effective when the application accesses contents that are actually stored in it. By tracking the Cache Hit Ratio, the percentage of successful reads out of all read operations, you can monitor its effectiveness. This and the associated cache metrics are available from the Cache Metrics dashboard (below), or see this page on Using Redis as an LRU cache:
4. Evicted Objects
If your cache’s hit ratio drops too low, that may mean that eviction has been triggered, causing the removal of data from Redis and resulting in cache misses. To verify that, check your memory usage readings vis-à-vis evictions.
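The hit ratio itself can be computed directly from the `keyspace_hits` and `keyspace_misses` counters in `INFO stats`. A minimal sketch with illustrative counter values:

```python
# Cache hit ratio: successful reads as a fraction of all read operations.
# Counter values are illustrative sample numbers.

def cache_hit_ratio(keyspace_hits, keyspace_misses):
    total_reads = keyspace_hits + keyspace_misses
    return keyspace_hits / total_reads if total_reads else 0.0

ratio = cache_hit_ratio(keyspace_hits=96_000, keyspace_misses=4_000)
# A dropping ratio alongside a rising evicted_keys counter suggests the
# eviction policy is removing data your application still reads.
```

Note that these counters are cumulative since the server started, so for a live dashboard you’d compute the ratio over deltas between snapshots rather than over the raw totals.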
5. Expired objects
Since eviction is triggered when Redis’ memory is running low, perhaps you’re not expiring cache objects. While Redis supports expiry, it is up to the application to set each object’s time-to-live, and this chore is too often neglected. By reviewing the expired objects readings, you can determine whether expiry is occurring or whether data keeps piling up in your cache.
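A quick way to sanity-check this is to compare the cumulative `expired_keys` and `evicted_keys` counters from `INFO stats`: a cache whose TTLs are doing their job expires far more keys than it evicts. A minimal sketch with illustrative counter values (the threshold logic is a rough heuristic, not a Redis rule):

```python
# Rough expiry-vs-eviction health check from INFO stats counters.
# Sample values only; a healthy cache usually shows the opposite skew.

def expiry_health(expired_keys, evicted_keys):
    if evicted_keys == 0:
        return "ok: no evictions; expiry is keeping memory in check"
    if expired_keys < evicted_keys:
        return "warning: more evictions than expirations; check your TTLs"
    return "ok: expiry dominates evictions"

status = expiry_health(expired_keys=120, evicted_keys=45_000)
```

If this fires, the usual fix is to set a TTL when writing each cache entry (e.g., via the EXPIRE command or the SET command’s expiry options) rather than relying on eviction to reclaim memory.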
According to a recent survey, database performance is the No. 1 challenge for developers. Fortunately, making Redis part of your stack can go a long way toward solving that problem. Still, ensuring that your Redis stays top-performing is an ongoing task that requires methodical monitoring and the proper tools to resolve alerts as they arise.
The best way to collect and analyze this information is to use the New Relic Platform. By optimizing your own server or using advanced hosted services, you can continue delivering the best experience to your users.
Want to learn more? Redis Labs will be hosting a webinar on Wednesday, May 13th, 1 p.m. ET (10 a.m. PT) to show you exactly how to enhance Redis performance to 1.5 million ops / sec at <1 msec latency.
Published at DZone with permission of Fredric Paul, DZone MVB. See the original article here.