Reputation: 660
We use MySQL (AWS Aurora) to store data for our online payment transactions. One of our tables, in which each row stores information about a particular transaction, has more than 1 billion rows.
How can I go about adding a new attribute for a transaction? Altering this table is not possible because of the large amount of time required to do so.
The only possible solution seems to be creating a new table that stores key-value pairs for each transaction. Are there other, more efficient ways to do this, assuming altering the table structure is not possible?
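For reference, a minimal sketch of the key-value (EAV) idea mentioned above, assuming the existing table is called transactions with primary key id; all table and column names here are made up for illustration:

    -- Hypothetical side table holding extra attributes as key-value pairs.
    -- transaction_id matches the primary key of the existing transactions table.
    CREATE TABLE transaction_attributes (
        transaction_id BIGINT UNSIGNED NOT NULL,
        attr_name      VARCHAR(64)     NOT NULL,
        attr_value     VARCHAR(255)    NULL,
        PRIMARY KEY (transaction_id, attr_name)
    );

    -- Fetch a transaction together with one of its extra attributes.
    SELECT t.*, a.attr_value AS risk_score
    FROM transactions t
    LEFT JOIN transaction_attributes a
           ON a.transaction_id = t.id AND a.attr_name = 'risk_score';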
Upvotes: 2
Views: 1501
Reputation: 562280
At my work, we have quite a few tables with over 1 billion rows. Developers add or remove columns, change data types, add or remove indexes, etc. Any kind of ALTER TABLE.
The way we do this is to use pt-online-schema-change, a free tool available from Percona. It allows you to do long-running schema changes, and you can still read and write the table while it's doing the change in the background.
It still takes a long time to do a change to a large table. In the largest cases, it takes weeks. But it doesn't block your work in the meantime.
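For illustration, a hedged sketch of how such a change might be run with pt-online-schema-change; the database, table, and column names are made up for the example:

    # Dry run first: the tool reports what it would do without changing anything.
    pt-online-schema-change \
      --alter "ADD COLUMN risk_score DECIMAL(5,2) NULL" \
      D=payments,t=transactions \
      --dry-run

    # Then run the change for real; the table stays readable and writable meanwhile.
    pt-online-schema-change \
      --alter "ADD COLUMN risk_score DECIMAL(5,2) NULL" \
      D=payments,t=transactions \
      --execute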
Upvotes: 2
Reputation: 142278
An alternative is to create a parallel table. It would have the same PRIMARY KEY as your current table (but without AUTO_INCREMENT), and it would have the 'new' column(s).
Then you would JOIN on the PK to fetch both old and new columns at the same time.
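A rough sketch of what that could look like, assuming the existing table is transactions with primary key id (names are illustrative only):

    -- Parallel table: same PK as transactions, no AUTO_INCREMENT, plus the new column(s).
    CREATE TABLE transactions_extra (
        id         BIGINT UNSIGNED NOT NULL,
        risk_score DECIMAL(5,2)    NULL,
        PRIMARY KEY (id)
    );

    -- Read old and new columns together by joining on the PK.
    SELECT t.*, x.risk_score
    FROM transactions t
    LEFT JOIN transactions_extra x ON x.id = t.id;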
Pros: No downtime, no big ALTER, etc.
Cons: Now the table is split in two. Subsequent columns being added go through the same dilemma.
Alternative to the alternative: Put a JSON column in that new table.
Pros: Very open-ended wrt adding more columns.
Cons: Can't index it very well. (This depends on what version you are using.)
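A sketch of the JSON variant of that side table, assuming MySQL 5.7+ (or a compatible Aurora version); one way to make a specific key indexable is a generated column extracted from the JSON. Names are hypothetical:

    -- Same side table, but with an open-ended JSON column for new attributes.
    CREATE TABLE transactions_extra (
        id    BIGINT UNSIGNED NOT NULL PRIMARY KEY,
        extra JSON NULL,
        -- Generated column pulls out one key so it can be indexed.
        risk_score DECIMAL(5,2)
            AS (CAST(extra->>'$.risk_score' AS DECIMAL(5,2))) STORED,
        KEY idx_risk_score (risk_score)
    );

    -- Read a value straight out of the JSON document.
    SELECT t.*, x.extra->>'$.risk_score' AS risk_score
    FROM transactions t
    LEFT JOIN transactions_extra x ON x.id = t.id;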
Upvotes: 2