Reputation: 1596
Need a better way to manage database schema changes
Anything that can be done for MySQL?
Right now, if there's a schema change, I have to take down prod for a bit, look at the diffs, apply the changes manually, and then run data migration/conversion scripts.
Would love to know if there are methods/tools out there that can lessen the pain.
Upvotes: 5
Views: 12138
Reputation: 21
Try Bytebase (bytebase.com). It's web-based and open source.
For your case it offers two methods:
Method 1: Create a dev-prod pipeline. Method 2: Do schema synchronization.
You can even adopt a GitOps workflow: commit SQL scripts to GitHub/GitLab, let the commit trigger script execution automatically, and then write back a LATEST SQL file to record the schema snapshot.
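The commit-triggered execution step above can be sketched as a small runner that applies each committed script exactly once and records it in a bookkeeping table. This is a hedged sketch, not Bytebase's implementation: the `schema_migrations` table name is an assumed convention, and sqlite3 stands in for MySQL so the example is self-contained (a real pipeline would use a MySQL driver).

```python
import sqlite3
from pathlib import Path

def apply_pending(conn, migrations_dir):
    """Apply each committed .sql script exactly once, in filename order.

    sqlite3 stands in for MySQL here; the schema_migrations table is an
    assumed bookkeeping convention, not part of any specific tool.
    """
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (filename TEXT PRIMARY KEY)"
    )
    done = {row[0] for row in conn.execute("SELECT filename FROM schema_migrations")}
    applied = []
    for path in sorted(Path(migrations_dir).glob("*.sql")):
        if path.name in done:
            continue  # already executed on an earlier commit
        conn.executescript(path.read_text())
        conn.execute("INSERT INTO schema_migrations VALUES (?)", (path.name,))
        applied.append(path.name)
    conn.commit()
    return applied
```

Because applied scripts are recorded, re-running the step after every commit is safe: only new scripts execute.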
Upvotes: 0
Reputation: 1487
I think a good approach is to treat database changes the way you treat code changes: keep them in source control and make deployments reliable and repeatable.
My philosophy on this matches the "database changes as code" pattern on DevOpsWire: https://web.archive.org/web/20111025093215/http://devopswire.com/patterns/database-changes-as-code
Tool-wise, look at things like dbdeploy and Liquibase.
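For instance, Liquibase tracks changes through a changelog kept in version control, and it supports a plain-SQL changelog format so the scripts stay readable as ordinary SQL. A small illustrative example (the author name `alice` and the table are assumptions, not from this thread):

```sql
--liquibase formatted sql

--changeset alice:1
CREATE TABLE t1 (
  t_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  t_data VARCHAR(45) NOT NULL,
  PRIMARY KEY (t_id)
) ENGINE=InnoDB;
--rollback DROP TABLE t1;
```

Running `liquibase update` applies only the changesets not yet recorded in its tracking table, which gives you the repeatable deployments described above.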
Upvotes: 1
Reputation: 32094
Why not accumulate the changes to the development schema in an update script, and simply run that script on the next release?
There really are various schema-comparison tools but, in my opinion, they should be used only to check that the update script is correct, not to generate it.
On each release, commit the script that generates the new schema to version control, together with the update script reset to empty.
Suppose here is your schema:
-- schema.sql
CREATE TABLE t1 (
`t_id` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
`t_data` VARCHAR(45) NOT NULL,
PRIMARY KEY (`t_id`)
) ENGINE = InnoDB COLLATE utf8_general_ci;
Commit 1
Now you have a lot of data in your table in production with a lot of duplicates, and you would like to make some normalization:
Scripts:
-- updates.sql
CREATE TABLE t2 (
`d_hash` CHAR(32) NOT NULL COLLATE ascii_general_ci,
`t_data` VARCHAR(45) NOT NULL,
PRIMARY KEY (`d_hash`)
) ENGINE = InnoDB COLLATE utf8_general_ci;
ALTER TABLE t1
ADD COLUMN `d_hash` CHAR(32) COLLATE ascii_general_ci AFTER `t_data`;
UPDATE t1 SET d_hash = MD5(UPPER(t_data));
INSERT IGNORE INTO t2 (t_data, d_hash)
SELECT t_data, d_hash
FROM t1;
ALTER TABLE t1 DROP COLUMN `t_data`,
MODIFY COLUMN `d_hash` CHAR(32) COLLATE ascii_general_ci NOT NULL,
ADD CONSTRAINT `FK_d_hash` FOREIGN KEY `FK_d_hash` (`d_hash`)
REFERENCES `t2` (`d_hash`) ON DELETE CASCADE ON UPDATE CASCADE;
Commit 2
Release
-- schema.sql
CREATE TABLE t1 (
`t_id` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
`d_hash` CHAR(32) COLLATE ascii_general_ci NOT NULL,
PRIMARY KEY (`t_id`),
KEY `FK_d_hash` (`d_hash`),
CONSTRAINT `FK_d_hash` FOREIGN KEY (`d_hash`)
REFERENCES `t2` (`d_hash`)
ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB COLLATE utf8_general_ci;
-- updates.sql
-- Empty
Commit 3
I would like to see a comparison tool that lets you do this much more simply.
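The "use a comparison tool only to verify the update script" step can itself be automated: build one scratch database from the fresh schema.sql, another from the previous schema plus updates.sql, and diff their table definitions. A minimal sketch of that check, with sqlite3 standing in for MySQL (an assumption for self-containment; against MySQL you would diff `information_schema` or `SHOW CREATE TABLE` output instead):

```python
import sqlite3

def schema_signature(*scripts):
    """Run the given SQL scripts against a scratch database and return
    a {table_name: column definitions} map suitable for comparison."""
    conn = sqlite3.connect(":memory:")
    for script in scripts:
        conn.executescript(script)
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
    # Each column row is (cid, name, type, notnull, default, pk).
    return {t: conn.execute(f"PRAGMA table_info({t})").fetchall() for t in tables}

def update_script_is_correct(old_schema, updates, new_schema):
    """True when the old schema plus the update script yields the new schema."""
    return schema_signature(old_schema, updates) == schema_signature(new_schema)
```

Running this as part of the release checklist catches an update script that drifted from the committed schema before it ever reaches production.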
Upvotes: 2
Reputation: 5987
The approaches described under "Need a better way to manage database schema changes" and "Compare two MySQL databases" would work perfectly here as well.
You can also check the comparison-related posts linked there.
Beyond that, it is always good to maintain the change scripts you write during staging or pre-deployment and execute them in one go when moving to production.
Upvotes: 0