How to train your Redis for production workloads

Read this tutorial to learn how to configure Redis for production workloads, including the maxclients limit, memory overcommit, transparent huge pages, and the TCP backlog.


Highly available databases are essential for the high availability of your mission-critical applications. Applications today process massive amounts of data and require blazingly fast response times, and MongoDB and Redis are often deployed together to serve these needs. When configuring these databases for production workloads, take the warnings in the logs seriously: acting on them proactively can save you from catastrophic failures and from getting paged overnight. The following is a handful of configurations to take into account and add to your production Redis checklist.

Maxclients Limit

The maxclients limit is the maximum number of clients Redis will handle under the current operating system limit. By default, it is set to 10000 clients in redis.conf. However, if the operating system's open-file limit is too low to support that many connections, Redis lowers maxclients to the number of file descriptors it can actually open minus 32 (Redis reserves a few descriptors for internal use), which with the common 4096 limit works out to 4064. Once the limit is reached, Redis rejects all new connections with an error:

max number of clients reached
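
To see where you stand, you can query the effective limit and the current number of connections with redis-cli (shown here against a local instance on the default port):

redis-cli CONFIG GET maxclients
redis-cli INFO clients | grep connected_clients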

Solution

For real-time applications where the number of clients is expected to be high, you should revisit this limit and adjust it to your use case. To raise the open-file limit for the redis user, edit /etc/security/limits.conf:

redis        - nofile   12000   
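
Note that if Redis is managed by systemd, /etc/security/limits.conf is not consulted for the service, so the file-descriptor cap has to be raised in the unit instead. A minimal sketch, assuming the unit is named redis-server:

systemctl edit redis-server
# in the drop-in that opens, add:
#   [Service]
#   LimitNOFILE=12000
systemctl restart redis-server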

Alternatively, the maxclients limit itself can be changed at runtime using redis-cli:

redis-cli 
127.0.0.1:6379> CONFIG SET maxclients 12000
127.0.0.1:6379> CONFIG GET maxclients
1) "maxclients"
2) "12000"

Overcommit Memory

Redis forks a child process to dump the DB to disk. If overcommit_memory is set to zero, the fork can fail unless there is as much free memory as would be needed to actually duplicate all of the parent's memory pages. In other words, with a 3 GB Redis dataset and only 2 GB of free memory, the background save fails. The error logged looks like:

[4916] 28 Jan 19:24:49.084 * 1 changes in 900 seconds. Saving...
[4916] 28 Jan 19:24:49.084  Can't save in background: fork: Cannot allocate memory 

Solution

To fix this, set vm.overcommit_memory to 1. For the running system, as root:

sysctl vm.overcommit_memory=1
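
The command above only affects the running kernel. To make the setting survive a reboot, also persist it in /etc/sysctl.conf and reload (as root):

echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
sysctl -p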

Setting overcommit_memory to 1 tells Linux to relax and perform the fork with a more optimistic memory allocation strategy, and this is indeed what you want for Redis.

Transparent Huge Pages (THP)

Transparent Huge Pages (THP) support is enabled by default in the Linux kernel, and it creates latency and memory usage issues with Redis.

Solution

Make sure the Linux kernel's transparent huge pages feature is disabled, as it greatly affects both memory usage and latency in a negative way. Run the following command as root, and add it to your /etc/rc.local file so the setting survives a reboot. Redis must be restarted after THP is disabled.

echo never > /sys/kernel/mm/transparent_hugepage/enabled
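
You can verify that the change took effect; the active value is shown in brackets:

cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]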

TCP Backlog

In high requests-per-second environments, you need a large TCP backlog to avoid slow-client connection issues. The kernel default in /proc/sys/net/core/somaxconn is a low 128.

Solution

Raise the value of somaxconn based on your use case, for example to 1024. Redis asks the kernel for the backlog configured by the tcp-backlog directive in redis.conf (511 by default), but the kernel silently truncates it to the value of /proc/sys/net/core/somaxconn, so make sure to raise both somaxconn and tcp_max_syn_backlog to get the desired effect.
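
A minimal sketch, as root, assuming a target backlog of 1024 (pick a value that matches your expected connection rate):

# raise the kernel limits for the running system
sysctl -w net.core.somaxconn=1024
sysctl -w net.ipv4.tcp_max_syn_backlog=1024

# persist them across reboots
echo "net.core.somaxconn = 1024" >> /etc/sysctl.conf
echo "net.ipv4.tcp_max_syn_backlog = 1024" >> /etc/sysctl.conf

# in redis.conf, ask for a matching backlog and restart Redis:
# tcp-backlog 1024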

If you're in the early stages of adopting Redis, having these items on your production readiness checklist can prove helpful. Are you new to Redis? Try some ping pong with Redis using an interactive tutorial.


