MySQL and MariaDB
Posted 10 February 2016
Phewy, where has the time gone? It's February already!
One of my clients is a fairly extensive MySQL user. They're mostly on CentOS 6, and so are running some pretty old versions of MySQL. There was a requirement to replicate all that data off-site (and then back it up, send it off to an archive service, etc). Trying to get ahead of the upgrade game, I figured I'd spin up a CentOS 7 box in their secondary datacentre and set to work. Of course, MySQL turned into MariaDB between CentOS 6 and 7. It takes a reasonable amount of fiddling about to get such a setup working, although it's actually relatively straightforward. In general, forget about trying to do any binary file copies between versions; instead, do mysqldump/restore-style imports. Also run mysql_upgrade liberally ;-)
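For what it's worth, the dump-and-restore dance looks roughly like this (wrapped up as shell functions; the paths and the exact mysqldump flags here are illustrative, not the precise commands I ran):

```shell
#!/bin/sh
# Sketch of a cross-version migration: dump on the old MySQL box,
# load into the new MariaDB one. Paths are illustrative.

dump_all() {
    # --single-transaction: consistent InnoDB snapshot without locking
    # --routines/--triggers: include stored code, not just table data
    mysqldump --all-databases --routines --triggers \
        --single-transaction > /tmp/all-dbs.sql
}

restore_all() {
    # Load the dump into the new server...
    mysql < /tmp/all-dbs.sql
    # ...then bring the system tables in line with the new version
    mysql_upgrade
}
```

Run dump_all on the old box, copy the file across, and run restore_all on the new one.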
The replica box has to run multiple instances of MariaDB (I know there's a multi-source replication facility these days, but it doesn't have an especially good reputation, so I didn't try it out). Setting this up is a relatively easy affair with Ansible, although it's necessary to make patched versions of some of the scripts that MariaDB uses, because for whatever reason you can't easily supply config information to them. As a result, if your MariaDB data directory isn't in the standard location, then you're probably in for some patching. I found I had to patch these scripts:
Obviously, you'll need a custom .service file for systemd too (which can be a template of the normal one that ships with MariaDB), and you can specify your patched scripts and other per-instance details in there.
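As a hedged sketch, a templated unit for per-instance services might look something like this — the instance naming, config paths, and patched-script location are all assumptions for illustration, not what MariaDB actually ships:

```ini
# /etc/systemd/system/mariadb@.service (sketch)
# Started as e.g. `systemctl start mariadb@clientdb1`, where each
# instance has its own config at /etc/my.cnf.d/instances/clientdb1.cnf
[Unit]
Description=MariaDB instance %i
After=network.target

[Service]
User=mysql
Group=mysql
# Hypothetical patched helper script in place of the stock one
ExecStartPre=/usr/local/libexec/mariadb-prepare-db-dir-patched %i
# Point mysqld at the per-instance config (datadir, port, socket)
ExecStart=/usr/sbin/mysqld --defaults-file=/etc/my.cnf.d/instances/%i.cnf

[Install]
WantedBy=multi-user.target
```

Each per-instance .cnf then just needs its own datadir, port and socket, and systemd's %i specifier ties the instance name to the right file.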
Capacity planning was pretty hard for all this too - I really had no idea what to expect. Thankfully our VMs are pretty flexible, so I was able to scale up as needed. I found that you need roughly 300-350MB of RAM per instance. You need hardly any CPU (even if you're using stunnel links, as we are), but you do need a bit of disk space for the database and any backups you want to keep.
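For the curious, the stunnel side of things needs very little configuration: a client-mode tunnel on the replica pointing at the master's stunnel. A minimal sketch, with all the hosts and ports made up:

```ini
; /etc/stunnel/mysql-repl.conf (replica side, sketch)
; The replica's mysqld connects to 127.0.0.1:3307, and stunnel
; carries that over TLS to the master's stunnel on port 3307.
client = yes

[mysql-replication]
accept = 127.0.0.1:3307
connect = master.example.com:3307
```

The replica's CHANGE MASTER then points at 127.0.0.1:3307 instead of the master directly.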
In this particular case, we send encrypted backups of the data off to Amazon Glacier. I've written about Glacier before - it really is easy to use, and pretty cheap (so long as you're not doing lots of retrievals). In this case we're just using mysqldump and gzip to make 'blobs' of data, which we GPG-encrypt before sending to Amazon.
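The backup pipeline itself is just a chain of pipes; something along these lines, where the key ID and output path are placeholders and the actual upload to Glacier is a separate step:

```shell
#!/bin/sh
# Sketch of the dump -> compress -> encrypt pipeline.
# KEY_ID and /backups are placeholders.

backup_db() {
    db="$1"
    # Dump one database, compress it, then encrypt to our backup key
    mysqldump --single-transaction "$db" \
        | gzip \
        | gpg --encrypt --recipient "$KEY_ID" \
              --output "/backups/${db}-$(date +%F).sql.gz.gpg"
}
```

The resulting blob can then be shipped to a Glacier vault with whichever client you prefer.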