abhayAndPoorvisDad

Reputation: 3997

Is it safe to execute mysqldump with --flush-logs when we have a slave running already?

I have a live MySQL database configured as the master of a slave that is already replicating from it. The slave is intentionally kept 10 minutes behind the master. Now I need to take a MySQL dump from the master in order to set up another slave.

If I take a dump from the master using the mysqldump --flush-logs option, like so

$ mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=2 -A  > ~/dump.sql

would this be OK? My main concern is whether flushing the logs on the master could affect the slave that is already replicating.
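For reference, my understanding is that --master-data=2 writes the binary log coordinates into the dump as a commented-out CHANGE MASTER TO statement, so I can check what was recorded with something like:

# show the commented-out CHANGE MASTER TO line near the top of the dump
$ grep "CHANGE MASTER TO" ~/dump.sql | head -n 1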

Upvotes: 2

Views: 4366

Answers (1)

Slawek

Reputation: 782

I think you have mistaken FLUSH for PURGE. The purpose of FLUSH is to clear and reload caches, or to push pending writes to disk. In MySQL some writes only happen on table close (for example), and sometimes you need the data to actually be on disk; FLUSH ensures it gets written.
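To make the distinction concrete, here is a rough sketch of the two statements (the log file name is just a placeholder):

FLUSH LOGS;                               -- close and reopen the log files, starting a new binary log
PURGE BINARY LOGS TO 'mysql-bin.000123';  -- delete binary log files up to (not including) the named one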

Now "why bother"... in some cases you will want to start replication by dumping SQL and saving log position, so after you import the SQL to the slave you can start from exactly the place on which you took db snapshot to be sure data is not corrupted (eg. by running single query from master - multiple times on the slave).

BTW: --single-transaction without locks is unsafe for any database that has writes to MyISAM tables; you could end up with databases dumped in inconsistent states. And since you already have one slave (which I assume is working correctly), why not dump the data from that slave instead, holding FLUSH TABLES WITH READ LOCK for the whole operation? That is the safest approach and always works as intended. It does read-lock the whole server for the duration of the dump, but since you would be locking the slave rather than the live master, why worry?
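If you dump from the existing slave, the rough sequence would look something like the sketch below; treat the exact steps and the noted coordinates as a starting point rather than a recipe:

STOP SLAVE SQL_THREAD;            -- stop applying relay log events so the data stops changing
FLUSH TABLES WITH READ LOCK;      -- read-lock the whole server for the duration of the dump
SHOW SLAVE STATUS\G               -- note Relay_Master_Log_File and Exec_Master_Log_Pos for the new slave
-- run the dump from a separate shell while the lock is held, e.g.:
--   mysqldump --skip-lock-tables --hex-blob -A > ~/dump.sql
UNLOCK TABLES;                    -- release the lock once the dump has finished
START SLAVE SQL_THREAD;           -- let the slave catch up with the master again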

Upvotes: 2
