parserr

Reputation: 729

Piping mysqldump to mysql

Sometimes I need to copy a MySQL database (db1) to another database (db2). I found this command to be concise and effective:

mysqldump --opt db1 | mysql db2

It was working fine, but now it breaks with the following error:

ERROR 1064 (42000) at line 1586: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'mysqldump: Couldn't execute 'SHOW TRIGGERS LIKE 'some_table_name'': MySQL server ' at line 1

The first thing that comes to mind is that the database is too big (the uncompressed SQL dump is >1 GB; 1090526011 bytes at the moment, to be precise) to be piped like this. But when I do mysqldump > file and then mysql < file, it works fine, no errors. The table mentioned in the error message (some_table_name) is not big or special.

My second idea comes from the impression that the error message might be truncated and actually says

"...MySQL server has gone away"

Quick research suggests that the maximum number of open files (for MySQL and/or the system) may have been reached. So I've tried adding --skip-lock-tables to mysqldump and raising open-files-limit, but no luck, same error.
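For reference, raising the limit means something like this (the values here are examples only, not what I ended up using):

# my.cnf, [mysqld] section
open_files_limit = 8192

# matching OS-level limit for the shell running the commands
ulimit -n 8192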

The obvious solution is to do a dump and then an import (as that works fine), but piping seems better and cleaner to me (let me know if I'm wrong), plus I'm curious to find out what causes this problem. Did I hit some limit that affects command piping?
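For clarity, the two-step variant that works is simply this (the file name is arbitrary):

mysqldump --opt db1 > db1.sql
mysql db2 < db1.sql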

I've been doing this on a hosting server running MySQL 5.1.60 on Linux, and on my dev machine running MySQL 5.1.58 on Linux. The latter gives a slightly different error:

mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table other_table_name at row: 7197


UPDATE: The problem is solved by doing a separate dump and import, without the pipe. Even though I feel it's not really an answer to my question, ssmusoke's suggestions were the most to the point, resulting in the accepted answer.

Upvotes: 20

Views: 28949

Answers (5)

Jauyzed

Reputation: 511

It is possible that the backup is hitting MySQL's timeout limits.

These variables can be changed in my.cnf:

net_read_timeout = 120
net_write_timeout = 900

If you prefer to change these settings without restarting MySQL, you can do so with the following SQL statements:

set global net_read_timeout = 120;
set global net_write_timeout = 900;

Note that you might need the SUPER privilege to run these.
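To check the current values before changing anything:

show global variables like 'net_%_timeout';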

Upvotes: 0

txyoji

Reputation: 6848

"MySQL server has gone away" is a symptom of a max packet error. http://dev.mysql.com/doc/refman/5.0/en/gone-away.html

Modify your command to specify a larger value for max_allowed_packet.

mysqldump --opt db1 | mysql --max_allowed_packet=32M db2

The default is 1M. It may take trial and error to get the right value. http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_allowed_packet
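Note that the server enforces its own max_allowed_packet limit as well, so if the client-side option alone does not help, the value may also need to be raised on the receiving server, e.g. in my.cnf (32M is just an example value):

[mysqld]
max_allowed_packet = 32M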

Upvotes: 13

Aaron Brown

Reputation: 334

The problem is that you are redirecting stderr to stdout, so any error messages from mysqldump are being interpreted as SQL by mysql. Remove 2>&1 and the real error will show up.
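For illustration, assuming the command was actually run with the redirect (the question shows it without 2>&1):

# broken: stderr is mixed into the pipe and mysql parses the error text as SQL
mysqldump --opt db1 2>&1 | mysql db2

# fixed: stderr goes to the terminal, only the dump reaches mysql
mysqldump --opt db1 | mysql db2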

Upvotes: 2

Stephen Senkomago Musoke

Reputation: 3523

The problem may be that the load on the server gets too high when dumping and loading at the same time. Piping also means you lose some optimizations, such as extended inserts and the ability to disable foreign key checks, which you can take advantage of when you dump to a file and then import it.

I would recommend that you use mysqldump to generate a backup file, then load it with mysql. That way the load on your server is reduced, and, as you said, it always works. You can even automate both steps in a bash script so that you do not have to run the mysqldump and load commands by hand.
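A minimal sketch of such a script (the database names and dump path are placeholders):

#!/bin/bash
# Dump db1 to a file, then load that file into db2.
set -e
DUMPFILE=/tmp/db1.sql
mysqldump --opt db1 > "$DUMPFILE"
mysql db2 < "$DUMPFILE"
rm -f "$DUMPFILE"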

Upvotes: 4

John Retterer

Reputation: 21

Are you redirecting the stderr stream as well as stdout from mysqldump? Error messages may be getting interleaved with the dump output. Try it without redirecting stderr:

mysqldump --opt db1 | mysql db2

Upvotes: 2
