Reputation: 65
Here's what I want to do (MySQL example):
I have a problem with step #4.
For instance, I split table1.sql into 3 files: table1_a.sql, table1_b.sql, and table1_c.sql. If the new dump contains new records, that's fine - they are just added to table1_b.sql.
But if records that were in table1_a.sql are deleted, all subsequent records shift, and git treats table1_b.sql and table1_c.sql as changed, which is not OK.
Basically, this destroys the whole idea of keeping an SQL backup in SCM.
My question: How can I split a big SQL dump file into small chunks so that each record stays in its original file, even when other records are later deleted?
Upvotes: 2
Views: 4263
Reputation: 1145
To split an SQL dump into files of 5000 lines each, execute in your terminal:
$ split -l 5000 hit_2017-09-28_20-07-25.sql dbpart-
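With GNU split you can also get numbered, .sql-suffixed chunk names, which keeps the pieces sorted and recognizable. A minimal self-contained sketch (the sample dump and file names below are mine, not from the answer; substitute your real dump file):

```shell
# Create a sample "dump" so the example is runnable on its own
# (replace dump.sql with your real file, e.g. hit_2017-09-28_20-07-25.sql):
seq 1 12000 > dump.sql

# GNU split: 5000-line chunks with numeric suffixes and a .sql extension
# (-d and --additional-suffix require GNU coreutils >= 8.16):
split -l 5000 -d --additional-suffix=.sql dump.sql dbpart-

# Result: dbpart-00.sql dbpart-01.sql dbpart-02.sql
ls dbpart-*.sql

# To restore, concatenate the chunks back in order, e.g.:
#   cat dbpart-*.sql | mysql -u user -p database
```

Note this is a purely line-based split, so it does not solve the original problem: deleting a record early in the dump still shifts every later line into a different chunk.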
Upvotes: 2
Reputation: 18408
Don't split them at all. Or split them by ranges of primary-key values. Or split them all the way down to one database row per file (and name each file after the table name plus the content of the primary key).
(That is apart from the even more obvious XY-problem answer, which was my instinctive reaction.)
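The one-row-per-file idea can be sketched as follows. This is a minimal illustration, not a robust parser: it assumes the dump was produced with `mysqldump --skip-extended-insert` (one INSERT statement per row), that the first column is an integer primary key, and the function name and regex are mine, not from the answer.

```python
import os
import re

# Matches single-row INSERT statements of the form produced by
# `mysqldump --skip-extended-insert`, e.g.:
#   INSERT INTO `table1` VALUES (42,'alice');
INSERT_RE = re.compile(r"INSERT INTO `(?P<table>\w+)` VALUES \((?P<pk>\d+),")

def split_per_row(dump_path, out_dir):
    """Write each INSERT to its own file named <table>_<pk>.sql.

    Because each file name is derived from the row's primary key,
    deleting other rows later leaves unrelated files untouched,
    so git only sees diffs for rows that actually changed.
    """
    os.makedirs(out_dir, exist_ok=True)
    with open(dump_path) as dump:
        for line in dump:
            m = INSERT_RE.match(line)
            if m:
                name = f"{m.group('table')}_{m.group('pk')}.sql"
                with open(os.path.join(out_dir, name), "w") as f:
                    f.write(line)
```

Restoring is then just concatenating the per-row files (plus the schema) back into one stream. The trade-off is a very large number of small files, which is why splitting by primary-key ranges is often the more practical middle ground.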
Upvotes: 2