Reputation: 147
I have an Amazon Aurora PostgreSQL-compatible database, running as a "live" pilot instance.
I'm planning a formal production transition for early next year, which I had imagined would include creating development and test instances, seeding them with snapshot restores, and so on. In the meantime, I have an immediate need to make some data model refinements that could affect existing views and procedures, and I'm reluctant to do this in the "live" instance, even though downtime would have no direct impact at the moment.
I've read the Amazon docs about Aurora cloning, but I haven't found any "real-world" articles or posts about using it in practice. The one non-Amazon article I did find really just restates the Amazon summary.
Does anyone have any direct experience of this capability? Or inside knowledge of the mechanics? Specifically:
I'm going to test it by creating an "old-fashioned clone" (snapshot restore to a new instance), then cloning that, but any insights in the meantime gratefully received!
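For reference, a clone can be created without a snapshot restore by using the point-in-time restore API with the copy-on-write restore type. This is a minimal sketch using the AWS CLI; the cluster identifiers and instance class are placeholders, and it assumes credentials and region are already configured:

```shell
# Create a copy-on-write clone of the pilot cluster.
# "pilot-cluster" and "pilot-clone" are hypothetical identifiers.
aws rds restore-db-cluster-to-point-in-time \
  --source-db-cluster-identifier pilot-cluster \
  --db-cluster-identifier pilot-clone \
  --restore-type copy-on-write \
  --use-latest-restorable-time

# The clone is created with no DB instances; add one to connect to it.
aws rds create-db-instance \
  --db-instance-identifier pilot-clone-instance-1 \
  --db-cluster-identifier pilot-clone \
  --db-instance-class db.r5.large \
  --engine aurora-postgresql
```

With `--restore-type copy-on-write`, the clone shares the source cluster's storage pages and only copies pages as either cluster modifies them, so creation is fast and the clone initially consumes little extra storage.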
Upvotes: 5
Views: 1420
Reputation: 663
We're using clones for development and staging copies of production as you describe, and it works great. That said, there is one scenario (a schema change to a large table) where we've seen very poor performance. Otherwise performance has been fine: we see no notable difference for regular INSERTs, UPDATEs, or DELETEs. A divergence would likely be more noticeable if you ran a huge UPDATE touching most of the rows in a large table, but for regular application work it performs well.
Upvotes: 1