22 Aug 2015

How to back up huge MySQL servers?

How do you back up huge MySQL MyISAM databases with a massive number of tables?

Backing up database servers that hold several hundred database schemas, each with roughly one hundred tables and several hundred gigabytes of data in total, has been a big problem for me with MySQL.

There are some issues with that:

  • shutting down MySQL may take up to an hour until all open file handles have been closed

  • mysqldump’ing all the databases and tables takes several hours

  • uptime is mission critical

Tonight I stumbled upon two approaches:

Filesystem Snapshots

I had thought of that option before, but never followed the thought through to the end. While searching for other approaches that seemed worth at least testing, I found this small line:

echo "FLUSH TABLES WITH READ LOCK; SYSTEM /path/to/helper/backupscript.sh; UNLOCK TABLES;" | mysql -u <user> -p <password>

(found in the third comment on http://dev.mysql.com/doc/refman/5.1/en/backup-methods.html at the time of writing)

In our case we could use this line as part of our backup script; in place of "/path/to/helper/backupscript.sh" we would run a wrapper script that creates a snapshot of the underlying LVM volume. Because all three statements are piped into a single mysql session, the read lock is still held while the SYSTEM command runs, so the snapshot captures a consistent state of the MyISAM files.
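
To make this concrete, here is a minimal sketch of such a wrapper, assuming the MySQL data directory lives on an LVM logical volume /dev/vg0/mysql-data. The volume group, volume names, snapshot size, and paths are all hypothetical and need to be adapted to the actual setup:

#!/bin/sh
# backupscript.sh -- invoked via mysql's SYSTEM command while the
# calling session still holds FLUSH TABLES WITH READ LOCK.
# All names below (vg0, mysql-data, 10G) are placeholders.
set -e

# Create a copy-on-write snapshot of the data volume. This takes only
# seconds, so the global read lock is held very briefly. The snapshot
# size must be large enough to absorb all writes that happen while
# the snapshot exists.
lvcreate --snapshot --size 10G --name mysql-snap /dev/vg0/mysql-data

Once UNLOCK TABLES has released the lock, the snapshot can be archived at leisure and then dropped:

# Mount the snapshot read-only and archive it, then clean up.
mkdir -p /mnt/mysql-snap
mount -o ro /dev/vg0/mysql-snap /mnt/mysql-snap
tar czf /backup/mysql-$(date +%F).tar.gz -C /mnt/mysql-snap .
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap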

Maatkit’s mk-parallel-dump

Maatkit was incorporated into the Percona Toolkit, and mk-parallel-dump and mk-parallel-restore were dropped in the process. But luckily there are some copies on GitHub. For example: https://github.com/ets/maatkit

There you can find a slightly enhanced version of the latest maatkit.

mk-parallel-dump might be an option for those older database servers that lack a snapshot-capable filesystem. A rough sketch of a run follows below.
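
From memory, an invocation could look roughly like this; I have not verified the option names against this particular checkout, so treat --base-dir and --threads as assumptions and check the tool's built-in documentation first:

# fetch the preserved maatkit tree
git clone https://github.com/ets/maatkit
cd maatkit

# dump all databases with four worker threads into /backup/dump;
# option names are from memory, and the path to the script may
# differ depending on the repository layout -- verify both
perl ./mk-parallel-dump --user <user> --password <password> \
    --base-dir /backup/dump --threads 4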

Disclaimer

I have not yet tested any of these options myself. As soon as I have, I will write a follow-up article on them.