Reputation: 5805
We have a service in production that writes independent (at least no relations) rows to a large (10-100G) InnoDB table. We want to migrate this service to a new DB host.
We started writing the data to the new DB:
ALTER TABLE xxx AUTO_INCREMENT=
on the new DB table, with an AUTO_INCREMENT value set high enough for all the old data to fit below it. Now we want to copy the old data into the new cluster while it's already in production and data is already being written into it. So we need to prevent the target table from locking, or from locking for too long, while we perform the copy.
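A minimal sketch of the AUTO_INCREMENT bump described above (the column name `id` and the numeric values are illustrative, not from the original post):

```sql
-- On the old host: find the current maximum key, to pick a safe offset.
SELECT MAX(id) FROM xxx;

-- On the new host: start new writes well above the old range, so the
-- copied rows can later be inserted without key collisions.
-- 200000000 is an illustrative value with generous headroom.
ALTER TABLE xxx AUTO_INCREMENT = 200000000;
```

The headroom matters: any old row whose id collides with a row already written on the new host would fail on insert during the copy.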
Upvotes: 0
Views: 284
Reputation: 5805
First, a DISCLAIMER: don't trust me, trust your local DBA! There are many subtleties to keeping a MySQL server / cluster running while moving large amounts of data around, and things can slow to a crawl or crash if you don't do it in exactly the right way for your data / situation.
We did the following:
mysqldump -h $olddbhost -p$pwd -u $user --no-create-info --single-transaction --skip-add-locks --net-buffer-length=800000 $dbname "table_name" | mysql -h $newdbhost -p$pwd -u $user $dbname
where:
--no-create-info
prevents the dump from emitting DROP TABLE / CREATE TABLE statements, so the existing target table is left in place.
--single-transaction
starts a transaction before reading, so the source table is dumped from a consistent snapshot without locking it.
--skip-add-locks
doesn't emit LOCK TABLES statements around the inserts, so the target table isn't locked while writing.
--net-buffer-length
splits the INSERT statements sent to the target DB into chunks of at most x bytes. If not specified, the value is read from the local mysql config.
Upvotes: 1