If you've been following the development of Redis for a while, you may have heard about Redis Cluster in the past - it's been around, to some degree, since 2011. According to Redis creator Salvatore Sanfilippo, it just wasn't the right time when it was first created:
I started the cluster project with a lot of rush, in a moment where it looked like Redis was going to be totally useless without an automatic way to scale.
It was not the right moment to start the Cluster project, simply because Redis itself was too immature, so we didn't yet have a solid “single instance” story to tell.
Well, now Redis Cluster actually exists! Sanfilippo describes it as follows:
Redis Cluster is basically a data sharding strategy, with the ability to reshard keys from one node to another while the cluster is running, together with a failover procedure that makes sure the system is able to survive certain kinds of failures.
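The sharding scheme itself is simple and documented in the Redis Cluster specification: the key space is split into 16384 hash slots, each key is assigned to the slot `CRC16(key) mod 16384` (using the XModem CRC16 variant), and slots are distributed across nodes. Keys containing a `{...}` hash tag are hashed only on the tag, so related keys can be forced onto the same slot. The sketch below illustrates this mapping; the function names are mine, not Redis's:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM (polynomial 0x1021), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_hash_slot(key: str) -> int:
    """Map a key to one of Redis Cluster's 16384 hash slots.

    If the key contains a non-empty {...} hash tag, only the tag
    is hashed, so tagged keys land on the same slot.
    """
    start = key.find('{')
    if start != -1:
        end = key.find('}', start + 1)
        if end != -1 and end != start + 1:  # tag must be non-empty
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384
```

For example, `key_hash_slot("{user1000}.following")` and `key_hash_slot("{user1000}.followers")` map to the same slot, which is what makes multi-key operations on related keys possible within a cluster.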
He also gives a rundown of the way Redis Cluster is put together, describing the ramifications of eventual consistency and the "last failover wins" merge strategy, among other things.
Currently it's a minimum viable product - intended mostly for testing, though Sanfilippo suggests it's already suitable for adoption in some cases - and he is already planning the next version and the new features it might bring.
If you're interested, you can find Redis Cluster as a tarball here, or on GitHub as 3.0.0-rc1.