Oscar Foley

Reputation: 7025

How can I assign bucket-owner-full-control to millions of already existing files in an S3 bucket?

I have an S3 bucket with millions of files copied there by a Java process I do not control. The Java process runs on an EC2 instance in "AWS Account A" but writes to a bucket owned by "AWS Account B". Account B was able to see the files but not open them.

I figured out the problem and requested a change to the Java process so that it writes new files with "acl = bucket-owner-full-control"... and it works! New files can be read from "AWS Account B".

But my problem is that I still have millions of files with the incorrect ACL. I can fix one of the old files easily with:

aws s3api put-object-acl --bucket bucketFromAWSAccountA --key datastore/file0000001.txt --acl bucket-owner-full-control
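That per-object call can be scripted over a full key listing (obtained, for example, from `aws s3api list-objects-v2`). A minimal sketch that only builds the CLI invocations, using the bucket and key names from the question as placeholders; actually running them (via subprocess, xargs, etc.) is left out:

```python
# Sketch: generate the `aws s3api put-object-acl` invocation for each key.
# The keys would come from a listing of the bucket; the names here are
# illustrative placeholders taken from the question.
def put_acl_command(bucket: str, key: str) -> list[str]:
    return [
        "aws", "s3api", "put-object-acl",
        "--bucket", bucket,
        "--key", key,
        "--acl", "bucket-owner-full-control",
    ]

# Example: build commands for a couple of keys.
cmds = [
    put_acl_command("bucketFromAWSAccountA", k)
    for k in ("datastore/file0000001.txt", "datastore/file0000002.txt")
]
```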

What is the best way to do that? I was thinking of something like:

# Copy to TEMP folder (sync is recursive by default)
aws s3 sync s3://bucketFromAWSAccountA/datastore/ s3://bucketFromAWSAccountA/datastoreTEMP/ --acl bucket-owner-full-control
# Delete original store
aws s3 rm s3://bucketFromAWSAccountA/datastore/ --recursive
# Sync it back to original folder
aws s3 sync s3://bucketFromAWSAccountA/datastoreTEMP/ s3://bucketFromAWSAccountA/datastore/ --acl bucket-owner-full-control

But that is going to be very time-consuming. I wonder if...

Upvotes: 0

Views: 3427

Answers (1)

David Sette

Reputation: 753

One option seems to be to recursively copy all objects in the bucket over themselves, specifying the ACL change to make.

Something like:

aws s3 cp --recursive --acl bucket-owner-full-control s3://bucket/folder s3://bucket/folder --metadata-directive REPLACE

That code snippet was taken from this answer: https://stackoverflow.com/a/63804619

It is worth reviewing the other options presented in answers to that question, as it looks like there is a possibility of losing content-type or other metadata information if you don't form the command properly.
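Whichever command you choose, a single sequential pass over millions of objects will be slow; splitting the key listing into batches for parallel workers is one way to speed it up. A minimal sketch of just the batching step (the worker execution itself, e.g. via subprocess or boto3, is an assumption left to the reader):

```python
# Sketch: split a large list of object keys into fixed-size batches so that
# several workers can fix ACLs (or run the in-place copy) concurrently.
def batched(keys, size):
    """Yield successive lists of at most `size` keys."""
    for i in range(0, len(keys), size):
        yield keys[i:i + size]

# Example: hand each batch of 1000 keys to a separate worker process.
batches = list(batched([f"datastore/file{i:07d}.txt" for i in range(2500)], 1000))
```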

Upvotes: 2
