Aditya Anand

Reputation: 906

How to split up an SQL database into multiple files (to be recombined later)?

I'm working on a rather data-heavy WordPress website locally. I'm ready to push to production, but the database is 18 GB (just the one table, one .ibd file). It's MariaDB 10.1.21.

The problem is, my upload speed is rather terrible, so I'd really rather not upload the file for two days straight only to have it error out at 99% and make me start all over again.

So, is there a way to split the database into manageable chunks of, say, 1 GB that I can upload one at a time and recombine back into the original form (a complete working database) on the server?

Upvotes: 0

Views: 2345

Answers (2)

Sohan.Choudhury

Reputation: 221

Create a text file using SELECT ... INTO OUTFILE as follows:

Example:

SELECT customer_id, firstname, surname INTO OUTFILE '/exportdata/customers.txt'
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  FROM customers;

Then use LOAD DATA INFILE to load the data back into the table (the format clauses must match the export):

LOAD DATA INFILE '/exportdata/customers.txt' INTO TABLE customers
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  (customer_id, firstname, surname);
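
If you want the roughly 1 GB chunks you asked about, you can also split the exported text file with the standard split utility and recombine it on the server before loading. A rough sketch, with placeholder paths:

# split the export into 1 GB pieces: customers.txt.part_aa, _ab, ...
split -b 1G /exportdata/customers.txt /exportdata/customers.txt.part_

# after uploading the pieces one at a time, recombine them on the server
cat customers.txt.part_* > customers.txt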

Upvotes: 1

tadman

Reputation: 211600

Don't chunk. Use rsync, which supports compression, can pick up where it left off if your connection drops out, and, most importantly for low-bandwidth situations, can regulate how quickly it uploads.

For example:

rsync -avzP --bwlimit=1000 db/ remote:db/

Where db/ is the database directory to be copied and remote is the destination server. You'll want to read up very carefully on how to set the source and destination paths so you don't inadvertently make two copies or dump it in the wrong place.
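
In particular, the trailing slash on the source path changes what gets copied; a quick illustration with the same placeholder paths:

# with a trailing slash, the contents of db/ land directly in remote:db/
rsync -avzP db/ remote:db/

# without it, rsync creates remote:db/db/, an easy way to end up with two copies
rsync -avzP db remote:db/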

Another thing to keep in mind is that you may want to use a tool like mysqldump to extract only the data. The MySQL data directory contains a lot of extra material that isn't strictly necessary because it can be rebuilt. This includes the transaction journal, which can get quite large, and the indexes on your data, which can easily eclipse the data itself in size.

A compressed .sql file is often a fraction of the size of the actual MySQL data directory even though it contains all of the data. It's just a more compact representation. Remember that the database stores things in a format optimized for reading and writing, not for storage.
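
A rough sketch of what that looks like, assuming a database called wordpress_db and placeholder paths:

# dump schema and data in one consistent pass and compress it on the fly
mysqldump --single-transaction --quick -u root -p wordpress_db | gzip > wordpress_db.sql.gz

# on the server, restore into an (empty) database of the same name
gunzip < wordpress_db.sql.gz | mysql -u root -p wordpress_db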

You can also use tools like xtrabackup, which can create consistent snapshots of a running InnoDB-backed database. Copying the data files of a running database can be hugely problematic because those files are constantly being modified. The xtrabackup tool creates a snapshot that's tested for consistency, something it can do because it has a working embedded InnoDB engine for examining the data files and manipulating their state in the copy as necessary.
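
The basic backup/prepare cycle looks roughly like this (the target directory is a placeholder; on MariaDB 10.1 the bundled mariabackup fork takes essentially the same options):

# copy the data files of the running server into a snapshot directory
xtrabackup --backup --target-dir=/backups/full

# replay the transaction log so the snapshot is consistent and restorable
xtrabackup --prepare --target-dir=/backups/full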

Upvotes: 2
