Reputation: 1127
I use two different ways to back up my MySQL databases. mysqldump with --all-databases (A) is much faster than a loop that dumps every database to its own file (B). Why? And how can I speed up the looped version?
/usr/bin/mysqldump --single-transaction --all-databases | gzip > /backup/all_databases.sql.gz
and B, a loop over 65 databases (four shown here), which stays slow even with nice:
nice -n 19 mysqldump --defaults-extra-file="/etc/mysql/conf.d/mysqldump.cnf" --databases -c xxx -q > /backup/mysql/xxx_08.sql
nice -n 19 mysqldump --defaults-extra-file="/etc/mysql/conf.d/mysqldump.cnf" --databases -c dj-xxx -q > /backup/mysql/dj-xxx_08.sql
nice -n 19 mysqldump --defaults-extra-file="/etc/mysql/conf.d/mysqldump.cnf" --databases -c dj-xxx-p -q > /backup/mysql/dj-xxx-p_08.sql
nice -n 19 mysqldump --defaults-extra-file="/etc/mysql/conf.d/mysqldump.cnf" --databases -c dj-foo -q > /backup/mysql/dj-foo_08.sql
mysqldump.cnf is only used for authentication; it contains no additional options.
Upvotes: 0
Views: 1152
Reputation: 31723
There are many differences.
In A you pipe the output through gzip, which compresses the data before it is written to disk. B writes plain SQL files, which can be 5-10 times bigger (measured on my database). If your performance is disk-bound, compressing B's output could be the solution.
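A minimal sketch of adding gzip to the per-database loop, so B writes compressed files like A does. The function name and the output-directory parameter are mine; the database names are the question's placeholders:

```shell
# backup_dbs OUTDIR DB...: dump each database separately, but pipe
# each dump through gzip before it hits disk.
backup_dbs() {
    outdir=$1; shift
    for db in "$@"; do
        nice -n 19 mysqldump --defaults-extra-file="/etc/mysql/conf.d/mysqldump.cnf" \
            --databases -c -q "$db" | gzip > "$outdir/${db}_08.sql.gz"
    done
}

# Usage, matching the question's layout:
# backup_dbs /backup/mysql xxx dj-xxx dj-xxx-p dj-foo
```

This keeps the one-file-per-database layout but removes the disk-write disadvantage compared with A.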
-c ("complete inserts": column names repeated in every INSERT statement) is specified in B but not in A.
-q is specified in B but not in A.
For large databases, INFORMATION_SCHEMA queries can be a pain in MySQL (try executing SELECT * FROM information_schema.columns). In B every single dump has to run these queries, while A has to do this only once.
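If that per-dump overhead is the bottleneck, a middle ground is a single mysqldump invocation that names several databases, so the connection and its setup cost are paid once rather than 65 times. A sketch under that assumption; the helper name and output path are mine:

```shell
# backup_selected OUTFILE DB...: one mysqldump run covering several
# databases, compressed like variant A.
backup_selected() {
    outfile=$1; shift
    nice -n 19 mysqldump --defaults-extra-file="/etc/mysql/conf.d/mysqldump.cnf" \
        --databases "$@" | gzip > "$outfile"
}

# e.g. backup_selected /backup/mysql/selected_08.sql.gz xxx dj-xxx dj-xxx-p dj-foo
```

The trade-off is that the databases end up in one file, so you lose the per-database restore granularity of B.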
Upvotes: 2