frosty

Reputation: 2649

Exporting and importing databases

I have a server on iPage, and I'm trying to find a way to back up my database correctly. iPage gives us the option to back up all of our tables at once in one file, or one by one in separate files. The thing is, though, I have a lot of rows in my tables, and many tables often exceed the import limit of 10.24 MB per file, not to mention my plan on iPage has a limit of 150,000 queries per hour. That means that even if no one visited my site during that hour and used up queries that way, an import of a backup file with more than 150,000 rows would be interrupted in the middle.

I know this question is kind of vague, but what I'm asking for is advice on what I should do instead, in this case, to back up my database in a way that lets me import the whole thing without it being interrupted in the middle, if that's possible. Any other advice about backing up databases in general is appreciated too.

Upvotes: 0

Views: 68

Answers (2)

Drew

Reputation: 24949

I would create delta backups. Every table of significant size (or, across the board, all of them) would contain a column such as

dtUpdated datetime not null

Don't let the name suggest it is only for updates. Rather, it is set to now() on any insert or update.

An auto_increment id in crucial tables certainly facilitates delta backups. I am not sure I would want to attempt it without one (though it is not strictly necessary). For the below I assume an auto_increment id is in place.

To prime it and get going, initially take your system offline so that no new entries come in.

Use alter table to add any missing columns such as the two described above.
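For example (just a sketch; Transactions here stands in for whichever table is missing the column):

alter table Transactions
    add column dtUpdated datetime not null,
    add key (dtUpdated);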

Set that dtUpdated column in all of the rows to something like

'2015-12-06 00:00:00' 

which would include the time component. Note that now() would not be a good choice at all for the priming part.
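The priming itself could be as simple as this (again only a sketch, using the Transactions table from the visual further down):

update Transactions
set dtUpdated = '2015-12-06 00:00:00';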

There would be another table that could be as small as 1 row, 1 column, holding a datetime that signifies when the last backup was taken. Let's call this table LastBackup. This won't change until your next backup.

Set this initially, as above.

Have your code modified so that any insert or update keeps the dtUpdated entry current with now().
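If you would rather not touch every insert and update in application code, a pair of triggers can do the same thing. A sketch, assuming the Transactions table from the visual further down (the trigger names are only examples); on MySQL 5.6.5+ declaring the column as datetime default current_timestamp on update current_timestamp is another option:

create trigger Transactions_bi before insert on Transactions
for each row set new.dtUpdated = now();

create trigger Transactions_bu before update on Transactions
for each row set new.dtUpdated = now();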

Delta backups would be smaller, probably significantly so, picking up just those rows from tables where dtUpdated is on or after the value in the LastBackup table.

A trick would be to capture the deletes. As data is deleted, that fact is logged to tables. For instance, each tracked table, such as Transactions, could have a corresponding DeletedTransactions table. They only need two columns (id, datetime). On restore, these delete tables, to give them a name, are brought over and replayed.

Data is exported and imported via Select into Outfile and Load Data Infile, respectively. These are Comma Separated Value (CSV) files, not in the format of mysqldump. The reason is performance, and an anecdotal example can be seen Here, albeit an extreme case.

How you deal with the file maintenance of these, including naming, will dictate how easy it is for you to resurrect your system to any savepoint on a timeline.


A quick visual to get you going

create table LastBackup
(   theWhen datetime not null
);

create table Transactions
(   id int auto_increment primary key,
    custId int not null,
    transTyp int not null,
    amount decimal(12,2) not null,
    dtUpdated datetime not null,
    key (dtUpdated)
);

create table DeletedTransactions
(   -- a delete trigger on Transactions table does an insert here for the id
    id int not null,        -- the Transactions.id value
    dtWhen datetime not null    -- when it happened
);
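
-- a sketch of that delete trigger (the trigger name is only an example);
-- it records the id and the moment of the delete so deletes can be replayed on restore
create trigger Transactions_bd before delete on Transactions
for each row insert into DeletedTransactions (id, dtWhen) values (old.id, now());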


truncate table Transactions;
insert Transactions (custId,transTyp,amount,dtUpdated) values
(123,1,100,'2014-07-11 14:10:00'),
(111,2,2100,'2014-07-11 14:10:01');
-- imagine if you had 10 million of those rows

-- time for the first backup, getting everything
truncate table LastBackup;
insert into LastBackup (theWhen) values ('2015-12-06');

select id,custId,transTyp,amount,dtUpdated
into outfile 'c:\\dev\\transactions_20151206.txt' -- or /path/to/file/xxx.txt for linux
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n' -- or '\n' for linux
FROM transactions
where dtUpdated<'2015-12-06';   -- this will be a range on subsequent delta backups

Here is that text file just exported:

1,123,1,100.00,"2014-07-11 14:10:00"
2,111,2,2100.00,"2014-07-11 14:10:01"
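On the next backup, the where clause becomes a range from the value stored in LastBackup up to the new cutoff, and LastBackup is then moved forward. On the restore side, files come back in with Load Data Infile. A sketch of both (the dates, paths, and file names here are examples only):

-- subsequent delta backup: only rows touched since the last backup
select id,custId,transTyp,amount,dtUpdated
into outfile 'c:\\dev\\transactions_20151213_delta.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
FROM transactions
where dtUpdated>=(select theWhen from LastBackup)
and dtUpdated<'2015-12-13';

-- move the marker forward for the following delta
truncate table LastBackup;
insert into LastBackup (theWhen) values ('2015-12-13');

-- restore side: load a backup file back in
-- (REPLACE lets rows re-exported in a delta overwrite their earlier versions)
load data infile 'c:\\dev\\transactions_20151206.txt'
replace into table transactions
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
(id,custId,transTyp,amount,dtUpdated);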

Upvotes: 1

thepiyush13

Reputation: 1331

I would suggest using the mysqldump utility if your hosting provider supports it. You can limit the number of rows that go into each dump:

mysqldump --databases X --tables Y --where="1 limit xxx"
xxx = your row limit per dumped table

http://shiplu.mokadd.im/110/splitting-huge-mysql-dump-for-easy-restore/

Otherwise you can simply select a limited number of rows with select statements and export them in chunks that fit within your import size limit.
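For example, something along these lines, run repeatedly with a moving offset until no rows come back (the table name, path, and chunk size are only placeholders):

-- export in chunks small enough for the import limit
select *
into outfile '/path/to/backup/transactions_chunk_001.csv'
fields terminated by ',' optionally enclosed by '"'
lines terminated by '\n'
from transactions
order by id
limit 100000 offset 0;   -- next run: offset 100000, then 200000, and so on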

Upvotes: 0
