Reputation: 1170
I'm trying to write a bash script which will, given the name of a local MySQL database, export the data from all its tables into CSV files with a header line containing field names. For example, if the database has tables customers, orders, and inventory, I want to dump three files, customers.csv, orders.csv, and inventory.csv, each including the field names in its header line.
Tinkering with mysqldump, I've managed to produce all the .csv files I need, but without the field name header line. The command also creates a set of .sql files containing only the table structure, in SQL commands. After googling around a lot, I can't find anyone who's been able to solve this problem without "reinventing the wheel" and writing their own MySQL dump script. mysqldump is great, it just lacks this one little feature. And all the field names I need are right there in those SQL files, it should just be a matter of parsing them out and prepending a line of field names to each CSV file, right?
My problem: I'm a shell scripting newbie, and I have no idea how to go about this.
Here's the mysqldump command I'm currently using:
mysqldump --host=localhost --user=myusername --password=mypassword \
--tab=/tmp/db/ --verbose mydatabase \
--fields-enclosed-by=\" --fields-terminated-by=,
Let's say the database mydatabase has the tables customers, orders, and inventory. This command will produce six files in the directory /tmp/db: customers.sql, customers.txt, orders.sql, orders.txt, inventory.sql, and inventory.txt. (The .txt files are CSV files; my script later changes the file extension to .csv.)
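That rename step can be a small loop like this (a sketch; it assumes the /tmp/db dump directory from the command above):

```shell
#!/bin/bash
# Rename the dumped .txt data files to .csv. The directory is passed
# in so the function can be pointed at any --tab output directory.
rename_txt_to_csv() {
    local dir="$1"
    local f
    for f in "$dir"/*.txt; do
        [ -e "$f" ] || continue   # skip if no .txt files matched
        mv "$f" "${f%.txt}.csv"
    done
}

rename_txt_to_csv /tmp/db
```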
The .sql files look like this (using customers as an example):
-- MySQL dump 10.13 Distrib 5.1.54, for debian-linux-gnu (i686)
--
-- Host: localhost Database: mydatabase
-- ------------------------------------------------------
-- Server version 5.1.54-1ubuntu4
/* (I removed some generated comments here) */;
--
-- Table structure for table `customers`
--
DROP TABLE IF EXISTS `customers`;
/* (I removed some generated comments here) */;
CREATE TABLE `customers` (
  `customer_id` int(11) NOT NULL AUTO_INCREMENT,
  `first_name` varchar(256) NOT NULL,
  `last_name` varchar(256) NOT NULL,
  `email` varchar(256) NOT NULL,
  `phone` varchar(20) NOT NULL,
  `notes` longtext NOT NULL,
  PRIMARY KEY (`customer_id`)
) ENGINE=MyISAM AUTO_INCREMENT=1865 DEFAULT CHARSET=latin1;
/* (I removed some generated comments here) */;
-- Dump completed on 2011-05-01 13:03:02
and the .txt files look as you would expect a CSV to look (lines of "data","like","this") but with no field name headers.
As you can see, the field names I need are right there in the .sql files after "CREATE TABLE...".
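One way that parse-and-prepend step could look in bash (a sketch, not battle-tested: it assumes each column definition in the .sql file is a line beginning with a backquoted name, as in the dump above, and that the data files still have their .txt extension):

```shell
#!/bin/bash
# Sketch: for every table.sql in the dump directory, pull the
# backquoted column names out of CREATE TABLE, join them into a
# quoted comma-separated header line, and write header + data
# into table.csv.
add_headers() {
    local dir="$1"
    local sqlfile table header
    for sqlfile in "$dir"/*.sql; do
        [ -e "$sqlfile" ] || continue
        table=$(basename "$sqlfile" .sql)
        # Column definitions are the lines starting with a backquoted
        # name; PRIMARY KEY / KEY lines start with a letter, so they
        # fall through the grep.
        header=$(grep -E '^[[:space:]]*`' "$sqlfile" \
            | sed 's/^[[:space:]]*`\([^`]*\)`.*/"\1"/' \
            | paste -sd, -)
        { echo "$header"; cat "$dir/$table.txt"; } > "$dir/$table.csv"
    done
}

add_headers /tmp/db
```

The field names are wrapped in double quotes to match the --fields-enclosed-by=\" setting in the mysqldump command.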
The ideal script I'm trying to write would do the following:
Any tips? I'll be tinkering with this all day until I figure it out.
Upvotes: 1
Views: 4307
Reputation: 7041
set group_concat_max_len = 5000;
set @qry = (select concat(
    "select ", group_concat(concat("'", column_name, "'")),
    " union select * from ", table_name,
    " into outfile '/tmp/test.csv'",
    " fields terminated by ',' optionally enclosed by '\"' escaped by '\"'",
    " lines terminated by '\\n'")
  from information_schema.columns
  where table_schema = database() and table_name = 'spree_users');
prepare stmt from @qry;
execute stmt;
deallocate prepare stmt;
Upvotes: 0
Reputation: 7324
I couldn't find a way to do this, given that:
I did a standard SQL dump and grep'd the file into CSV format.
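A hedged reconstruction of what that grep route can look like (assuming a dump made with --skip-extended-insert so each row is its own INSERT statement, and data free of embedded quotes and commas):

```shell
#!/bin/bash
# Rough sketch: pull the INSERT rows out of a standard mysqldump
# file and strip away the SQL syntax, leaving quoted CSV lines.
# Fragile for data containing quotes or commas; illustration only.
sql_to_csv() {
    grep '^INSERT INTO' "$1" \
        | sed -e 's/^INSERT INTO [^(]*VALUES (//' -e 's/);$//' \
        | tr "'" '"'
}

# usage: sql_to_csv mydatabase_dump.sql > mydatabase.csv
```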
Upvotes: 0
Reputation: 56397
I made a simple test under Windows.
create database if not exists test;
use test;
create table csv_header(
id int not null auto_increment primary key,
fname varchar(50),
lname varchar(50),
dob date)
engine = myisam;
insert into csv_header (fname,lname,dob) values
('nick','smith','2000-12-05'),
('john','white','1990-12-05');
set @str = (select concat("select * from (select ", group_concat(concat("'",column_name,"'") order by ordinal_position)," union
select * from ", table_name, ") as t into outfile 'd:/",table_name,".txt'
fields terminated by ','
lines terminated by '\r\n'")
from information_schema.columns
where table_schema = 'test' and table_name = 'csv_header');
-- select @str;
prepare stmt from @str;
execute stmt;
deallocate prepare stmt;
And this is the content of my csv_header.txt:
id,fname,lname,dob
1,nick,smith,2000-12-05
2,john,white,1990-12-05
If this is what you're looking for, it's simple to create a stored procedure with a cursor that loops over all the tables within the schema and does the same thing for each one. Let me know. :)
Upvotes: 2