Posted 23 April 2015

I've been playing around with Redis lately. We use Redis as a 'stash' of data from Puppet and MCollective (one independent master on each Puppet master). The data is refreshed pretty regularly, so durability isn't much of a concern, and all the clients put their data into all the masters, so they're all (in theory) identical. However, we're now using the same Redis as a data store for some related applications, so we need a single data store rather than multiple identical masters.

Initially, I was able to set Redis up as a master/slave replication cluster pretty quickly. Redis can replicate a small master to a slave almost instantly, so this works really well for us. I got (most of) our clients to use the single master and all was good. However, I was also asked to use Redis Sentinel to provide failover. I should point out that I'm not aware of any of our Redis databases ever going down unexpectedly, so failover isn't something we expect to use very often.
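For reference, making one Redis a slave of another really is just a line or two of config. This is a minimal sketch (the hostname and port are placeholders, not our real ones):

```conf
# On the slave box's redis.conf: point this instance at the master.
slaveof redis-master.example.com 6379

# Refuse writes on the slave so clients can't accidentally
# make it diverge from the master.
slave-read-only yes
```

You can also do the same thing at runtime with `redis-cli slaveof <host> <port>`, which is what makes promotion and demotion so fast.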

Redis Sentinels are relatively new. A Sentinel is an independent process that you point at the master Redis; it then follows the replication config to 'discover' all the nodes in the group. From then on, the Sentinels monitor the instances and fail the master over to one of the slaves if it dies. From what I can tell it's not a flawless operation, but it works most of the time and does so really fast (promoting a slave to a master in Redis is unbelievably simple and quick). The Sentinels can be configured to work together, only performing a failover when enough of them agree, so you can avoid 'split brain' problems and accidental failovers. In our case we've only got two boxes, so the voting mechanisms are a bit wasted on us, but it works well even so.
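A Sentinel config only needs to name the master; everything else is discovered. A minimal sketch (again, the hostname and the 'mymaster' label are placeholders):

```conf
# sentinel.conf - point the Sentinel at the master; it discovers
# the slaves (and any other Sentinels) from there.
# The final '2' is the quorum: how many Sentinels must agree
# the master is down before a failover starts.
sentinel monitor mymaster redis-master.example.com 6379 2

# Consider the master down after 30 seconds of no response.
sentinel down-after-milliseconds mymaster 30000

# Give up on a failover attempt after 3 minutes.
sentinel failover-timeout mymaster 180000
```

With only two boxes, a quorum of 2 means a single dead Sentinel blocks failover, which is part of why the voting is a bit wasted on a setup like ours.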

There's a lesson to learn here, though. As soon as you manually make a master into a slave, or a failover happens, the Sentinels and Redis databases will rewrite their own config files in place. If (like us) you control those files with Puppet, you'll get all kinds of trouble as Puppet and Redis battle it out over whose version of the file persists. We now have Puppet install a template, and then copy the template to a 'live' config file (which Puppet doesn't know anything about). This seems to work well enough, although making substantial changes to the config isn't quite so easy.
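The template-then-copy trick looks something like this in Puppet (paths and the template name are illustrative, not our actual manifests):

```puppet
# Puppet owns the template; the live config is only seeded from it
# once, so Redis/Sentinel can rewrite the live file without Puppet
# fighting to put its version back.
file { '/etc/redis/redis.conf.template':
  ensure  => file,
  content => template('redis/redis.conf.erb'),
}

exec { 'seed-live-redis-config':
  command => '/bin/cp /etc/redis/redis.conf.template /etc/redis/redis.conf',
  creates => '/etc/redis/redis.conf',
  require => File['/etc/redis/redis.conf.template'],
}
```

The `creates` guard is what stops Puppet from clobbering the live file on every run; the downside, as noted, is that pushing a substantial config change means deleting the live file (or bumping the copy logic) by hand.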

All in all, Redis seems like a pretty good little product. It's not a 'real' database, so I wouldn't trust it with data I couldn't re-create (without spending the time to develop a backup solution). However, because it's a simple, stripped-down data store, it can do things that 'real' databases can only dream of. I'm left wondering how else I might be able to use Redis in the future. I can't think of much right now, but I'm sure there'll be something...

Tags: #redis
