Reputation: 641
We have a parent bucket, say "Bucket-1". Under this Bucket-1 we have multiple folders, one per customer (say cust1, cust2, cust3 and so on). We upload GBs of data into these folders for each customer, and this data is uploaded via multiple channels: JS, Java, Swift (iOS), etc.
Requirement: Now we've got a requirement from a specific customer to upload its forthcoming data into some other dedicated bucket, say "Bucket-2", so that special read permissions can be granted to that customer's other AWS account.
Solution(s) that I came up with:
1. Code modifications in all the channels (JS, Java, Swift) to accommodate this change. But this solution is (a) time-consuming and (b) tightly couples the upload logic to a customer-specific requirement.
2. Using an AWS Lambda function to copy data between buckets. This Lambda would be triggered by any upload into that specific customer's folder (a rough sketch of what I have in mind is below).
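A minimal sketch of #2, assuming an S3 event notification on Bucket-1 filtered to the customer's prefix (e.g. cust1/) and the AWS SDK for JavaScript v3; bucket names and the prefix are just the placeholders from above:

```typescript
// Sketch only: copy each newly uploaded object under cust1/ into Bucket-2.
// Assumes the S3 event notification is configured with a "cust1/" prefix filter
// and the Lambda role can read from Bucket-1 and write to Bucket-2.
import { S3Client, CopyObjectCommand } from "@aws-sdk/client-s3";
import type { S3Event } from "aws-lambda";

const s3 = new S3Client({});
const DEST_BUCKET = "Bucket-2"; // placeholder name from the question

export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    const srcBucket = record.s3.bucket.name;
    // Keys arrive URL-encoded in S3 events; decode before reusing them
    const srcKey = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));

    // Server-side copy into the dedicated customer bucket, keeping the same key
    await s3.send(
      new CopyObjectCommand({
        CopySource: `${srcBucket}/${srcKey}`,
        Bucket: DEST_BUCKET,
        Key: srcKey,
      })
    );
  }
};
```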
For now, #2 seems the best fit to me. Any suggestions, or any other solution that comes to mind?
Upvotes: 1
Views: 1045
Reputation: 641
Anyone else with the same requirement, please refer to the post below as well:
Upvotes: 0
Reputation: 269470
You can use Cross-Region Replication (CRR) - Amazon Simple Storage Service to automatically replicate an Amazon S3 bucket (or part of a bucket) to another bucket. This happens automatically but asynchronously.
The downside is that it requires the other bucket to be in a different region (which is the primary use case for CRR).
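For reference, a replication rule limited to that customer's prefix could be set up roughly like this (a sketch using the AWS SDK for JavaScript v3; the role ARN, bucket names, and prefix are placeholders, and versioning must already be enabled on both buckets):

```typescript
// Sketch: replicate only the "cust1/" prefix of Bucket-1 into Bucket-2.
// Versioning must be enabled on both buckets; the role ARN is a placeholder
// for an IAM role that S3 can assume for replication.
import { S3Client, PutBucketReplicationCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

async function enableReplication(): Promise<void> {
  await s3.send(
    new PutBucketReplicationCommand({
      Bucket: "Bucket-1",
      ReplicationConfiguration: {
        Role: "arn:aws:iam::111122223333:role/s3-replication-role", // placeholder
        Rules: [
          {
            ID: "replicate-cust1",
            Status: "Enabled",
            Priority: 1,
            Filter: { Prefix: "cust1/" },
            DeleteMarkerReplication: { Status: "Disabled" },
            Destination: { Bucket: "arn:aws:s3:::Bucket-2" },
          },
        ],
      },
    })
  );
}

enableReplication().catch(console.error);
```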
Alternatively, instead of copying the data to another bucket, you could use a Bucket Policy to grant the necessary access to the other account. This makes much more sense than duplicating read-only data purely to provide a different type of access. It is worth exploring this option before building a more complicated solution that would need ongoing maintenance (and would cost more in storage); a sketch of such a policy is below.
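For example, a bucket policy along these lines (a sketch; the account ID is a placeholder for the customer's AWS account) would give that account read-only access to its own prefix without copying anything:

```typescript
// Sketch: grant the customer's other AWS account (placeholder ID below)
// read-only access to the cust1/ prefix of Bucket-1 via a bucket policy.
import { S3Client, PutBucketPolicyCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

const policy = {
  Version: "2012-10-17",
  Statement: [
    {
      Sid: "CustomerReadObjects",
      Effect: "Allow",
      Principal: { AWS: "arn:aws:iam::444455556666:root" }, // customer's account
      Action: "s3:GetObject",
      Resource: "arn:aws:s3:::Bucket-1/cust1/*",
    },
    {
      Sid: "CustomerListPrefix",
      Effect: "Allow",
      Principal: { AWS: "arn:aws:iam::444455556666:root" },
      Action: "s3:ListBucket",
      Resource: "arn:aws:s3:::Bucket-1",
      Condition: { StringLike: { "s3:prefix": "cust1/*" } },
    },
  ],
};

async function applyPolicy(): Promise<void> {
  await s3.send(
    new PutBucketPolicyCommand({
      Bucket: "Bucket-1",
      Policy: JSON.stringify(policy),
    })
  );
}

applyPolicy().catch(console.error);
```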
Upvotes: 1