Reputation: 1
I would like to load data from MySQL into Redshift.
My data values can change at any time, so I need to capture both the old and the new records in Redshift.
Modified records need to be archived; only the new records should be reflected in Redshift.
For example
MySQL table:
ID NAME SAL
-- ---- -----
1 XYZ 10000
2 ABC 20000
First load into Redshift (this should be the same as the MySQL table):
ID NAME SAL
-- ---- ----
1 XYZ 10000
2 ABC 20000
Second load (I changed the salary of employee 'XYZ' from 10000 to 30000):
ID NAME SAL
-- ---- ----
1 XYZ 30000
2 ABC 20000
The above table should be reflected in Redshift, and the modified record (1 XYZ 10000) should be archived.
Is this possible?
Upvotes: 0
Views: 505
Reputation: 177
See this article: https://flydata.com/blog/mysql-to-amazon-redshift-replication.
A better option is Change Data Capture (CDC). CDC is a technique that captures changes made to data in MySQL and applies them to the destination Redshift table. It's similar to the technique mentioned by systemjack, but it imports only the changed data, not the entire database.
To use the CDC method with a MySQL database, you must use the binary log (binlog). The binlog lets you capture change data as a stream, enabling near-real-time replication.
The binlog captures not only data changes (INSERT, UPDATE, DELETE) but also table schema changes such as ADD/DROP COLUMN. It also ensures that rows deleted from MySQL are deleted in Redshift as well.
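To make the apply semantics concrete, here is a minimal sketch of how binlog-style change events map onto a target table. The event format, table, and columns are illustrative (a real pipeline would read events from the binlog, e.g. with a library like python-mysql-replication, and write to Redshift); plain dicts stand in for both here.

```python
# Simulate applying a stream of binlog-style CDC events to a target table.
# The event shape ({"op": ..., "row": ...}) is illustrative, not a real
# binlog format; the target is a dict keyed by primary key.

def apply_cdc(target, events):
    """Apply INSERT/UPDATE/DELETE change events to the target table."""
    for event in events:
        op, row = event["op"], event["row"]
        if op in ("INSERT", "UPDATE"):
            target[row["id"]] = row          # new row image replaces the old
        elif op == "DELETE":
            target.pop(row["id"], None)      # deletes propagate to the target
    return target

# Initial full load, then an update and a delete arriving as change events.
table = {1: {"id": 1, "name": "XYZ", "sal": 10000},
         2: {"id": 2, "name": "ABC", "sal": 20000}}
events = [
    {"op": "UPDATE", "row": {"id": 1, "name": "XYZ", "sal": 30000}},
    {"op": "DELETE", "row": {"id": 2, "name": "ABC", "sal": 20000}},
]
apply_cdc(table, events)
print(table)  # {1: {'id': 1, 'name': 'XYZ', 'sal': 30000}}
```

Because only the events are shipped, each replication run moves just the changed rows rather than the whole database.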
Upvotes: 0
Reputation: 3005
How many rows are you expecting?
One approach would be to add a timestamp column that is updated to the current time whenever a record is modified.
Then, with an external process doing a replication run, you could get the max timestamp from Redshift, select any records from MySQL whose timestamp is greater than that, and dump them to S3 so they can be loaded into Redshift with the COPY command.
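The watermark extract can be sketched as follows. SQLite stands in for MySQL here, and the table/column names (emp, updated_at) and timestamps are made up for illustration; in practice the watermark would come from a MAX(updated_at) query against Redshift and the selected rows would be dumped to S3 for COPY.

```python
import sqlite3

# Simulated MySQL source table with a last-modified timestamp column.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE emp (id INT, name TEXT, sal INT, updated_at TEXT)")
src.executemany("INSERT INTO emp VALUES (?, ?, ?, ?)", [
    (1, "XYZ", 10000, "2024-01-01 00:00:00"),
    (2, "ABC", 20000, "2024-01-01 00:00:00"),
])
# Record 1 is modified later; updated_at is bumped to the current time.
src.execute("UPDATE emp SET sal = 30000, "
            "updated_at = '2024-01-02 09:00:00' WHERE id = 1")

# The replication run reads the max timestamp already loaded into Redshift...
watermark = "2024-01-01 00:00:00"
# ...and selects only the rows modified after it (these are the rows that
# would be dumped to S3 and loaded with COPY).
changed = src.execute(
    "SELECT id, name, sal FROM emp WHERE updated_at > ?", (watermark,)
).fetchall()
print(changed)  # [(1, 'XYZ', 30000)]
```

Note this approach cannot see hard deletes in MySQL, which is one reason the binlog/CDC answer above handles deletions more cleanly.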
To load the new records and archive the old ones, you'll need a variation of the Redshift upsert pattern: load into a temporary table, identify the records in the main table to be archived, move those to an archive table (or UNLOAD them to an S3 archive), and then ALTER TABLE ... APPEND the new records into the main table.
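The archive-then-append steps look like this in SQL. SQLite is used below so the sketch is runnable; in Redshift the staging table would be a CREATE TEMP TABLE filled by COPY from S3, and step 3 would be ALTER TABLE ... APPEND rather than INSERT ... SELECT. The table names (emp, emp_arc, staging) are illustrative.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE emp     (id INT, name TEXT, sal INT);  -- main table
    CREATE TABLE emp_arc (id INT, name TEXT, sal INT);  -- archive table
    CREATE TABLE staging (id INT, name TEXT, sal INT);  -- temp load target

    INSERT INTO emp VALUES (1, 'XYZ', 10000), (2, 'ABC', 20000);
    -- Second load arrives in staging: employee 1 now has salary 30000.
    INSERT INTO staging VALUES (1, 'XYZ', 30000);

    -- 1. Copy the old versions of changed records to the archive.
    INSERT INTO emp_arc
        SELECT * FROM emp WHERE id IN (SELECT id FROM staging);
    -- 2. Remove those records from the main table.
    DELETE FROM emp WHERE id IN (SELECT id FROM staging);
    -- 3. Append the new records (ALTER TABLE ... APPEND in Redshift).
    INSERT INTO emp SELECT * FROM staging;
""")
print(db.execute("SELECT * FROM emp ORDER BY id").fetchall())
# [(1, 'XYZ', 30000), (2, 'ABC', 20000)]
print(db.execute("SELECT * FROM emp_arc").fetchall())
# [(1, 'XYZ', 10000)]
```

This matches the example in the question: the main table ends up with the new salary, while the superseded row (1, XYZ, 10000) lands in the archive.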
Upvotes: 1