TinkerTank

Reputation: 5805

How to copy a large MySQL table into a live server without locking the target table

We have a service in production that writes independent rows (no relations to other tables) to a large (10-100 GB) InnoDB table. We want to migrate this service to a new DB host.

We have already started writing new data to the new DB. Now we want to copy the old data into the new cluster while the service is in production and data is being written to it. So we need to prevent the target table from locking, or from locking for too long, while we perform the copy.

Upvotes: 0

Views: 284

Answers (1)

TinkerTank

Reputation: 5805

First, a disclaimer: don't trust me, trust your local DBA! There are many subtleties to keeping a MySQL server or cluster running while moving large amounts of data around, and things can slow to a crawl or crash if you don't do it in exactly the right way for your data and situation.

We did the following:

mysqldump -h $olddbhost -p$pwd -u $user --no-create-info --single-transaction --skip-add-locks --net-buffer-length=800000 $dbname "table_name" | mysql -h $newdbhost -p$pwd -u $user $dbname

where:

  • --no-create-info leaves the CREATE TABLE statement (and the DROP TABLE that normally precedes it) out of the dump, so the existing target table is not dropped and re-created.
  • --single-transaction dumps the data inside a single consistent-read transaction instead of taking table locks, so the source table stays readable and writable during the dump (this relies on the table being InnoDB).
  • --skip-add-locks omits the LOCK TABLES / UNLOCK TABLES statements that mysqldump adds around the INSERTs by default, so the target table is not locked while the data is loaded.
  • --net-buffer-length=800000 caps each generated INSERT statement at roughly that many bytes, so the data arrives as many smaller statements instead of a few huge ones. If not specified, the value is read from the local MySQL config.
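
After the copy finishes, it's worth a sanity check that the rows actually arrived. A minimal sketch follows, reusing the $olddbhost / $newdbhost / $user / $pwd / $dbname variables and the table_name placeholder from the command above; because the service keeps writing to the new host during the copy, expect the new count to be at least the old one rather than exactly equal, and note that COUNT(*) on a 10-100 GB InnoDB table is itself a slow full scan:

# Row count on the old (source) host; slow on a table this size.
old_count=$(mysql -h $olddbhost -p$pwd -u $user -N -B -e "SELECT COUNT(*) FROM table_name" $dbname)

# Row count on the new (target) host, which is still receiving live writes.
new_count=$(mysql -h $newdbhost -p$pwd -u $user -N -B -e "SELECT COUNT(*) FROM table_name" $dbname)

echo "old: $old_count new: $new_count"

# The target should hold at least as many rows as the source snapshot.
[ "$new_count" -ge "$old_count" ] && echo "copy looks complete" || echo "rows appear to be missing"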

Upvotes: 1
