Patrick Suter

Reputation: 285

S3 Copy Object with new metadata

I am trying to set the Cache-Control header on all our existing files in S3 storage by executing a copy to the exact same key but with new metadata. The S3 API supports this through the x-amz-metadata-directive: REPLACE header. In the documentation of the S3 API compatibility at https://docs.developer.swisscom.com/service-offerings/dynamic.html#s3-api, the Object Copy method is listed neither as supported nor as unsupported.

The copy itself works fine (to another key), but setting new metadata does not seem to work, whether I copy to the same key or to a different one. Is this not supported by the ATMOS S3-compatible API, and is there any other way to update the metadata without having to read all the content and write it back to storage?

I am currently using the Amazon Java SDK (v1.10.75.1) to make the calls.

UPDATE:

After some more testing, it seems the issue I am having is more specific: the copy works, and I can successfully change other metadata such as Content-Disposition or Content-Type. Only Cache-Control is ignored.

As requested here is the code I am using to make the call:

// Credentials and endpoint for the S3-compatible (ATMOS) service
BasicAWSCredentials awsCreds = new BasicAWSCredentials(accessKey, sharedsecret);
AmazonS3 amazonS3 = new AmazonS3Client(awsCreds);
amazonS3.setEndpoint(endPoint);

// Fetch the existing metadata, set Cache-Control, then copy the
// object onto itself so the new metadata replaces the old.
ObjectMetadata metadata = amazonS3.getObjectMetadata(bucketName, storageKey).clone();
metadata.setCacheControl("private, max-age=31536000");
CopyObjectRequest copyObjectRequest =
        new CopyObjectRequest(bucketName, storageKey, bucketName, storageKey)
                .withNewObjectMetadata(metadata);
amazonS3.copyObject(copyObjectRequest);

Maybe the Cache-Control header on the PUT (Copy) request is being dropped somewhere along the way to the API?
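For reference, an in-place copy with replaced metadata should come out of the SDK as a PUT carrying roughly the headers sketched below. This is only a sketch of the expected request headers (the bucket/key names and the omitted Date/Authorization headers are placeholders), useful for comparing against a wire capture to see whether Cache-Control leaves the client at all:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CopyHeadersSketch {

    // Builds the header set an S3 copy-with-metadata-replacement request
    // is expected to carry. Bucket and key are placeholders; endpoint,
    // Date and Authorization headers are omitted for brevity.
    static Map<String, String> copyHeaders(String bucket, String key) {
        Map<String, String> headers = new LinkedHashMap<>();
        // Identifies the source object being copied (here: the same key).
        headers.put("x-amz-copy-source", "/" + bucket + "/" + key);
        // REPLACE tells the service to take metadata from this request
        // instead of copying it from the source object.
        headers.put("x-amz-metadata-directive", "REPLACE");
        // The header that appears to be dropped by the backend.
        headers.put("Cache-Control", "private, max-age=31536000");
        return headers;
    }

    public static void main(String[] args) {
        copyHeaders("my-bucket", "my-key")
                .forEach((k, v) -> System.out.println(k + ": " + v));
    }
}
```

If a capture of the SDK's actual request shows all three headers present, the problem is on the server side rather than in the SDK.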

Upvotes: 2

Views: 3493

Answers (1)

gsmachado

Reputation: 197

According to the latest ATMOS Programmer's Guide (version 2.3.0, Tables 11 and 12), object COPY is specified neither as supported nor as unsupported.

I've been working with ATMOS for quite some time, and my belief is that the S3 copy function is internally translated into a sequence of commands using ATMOS object versioning (page 76). So they might translate the Amazon copy operation into "create a version" and then "delete or truncate the old referenced object". Maybe I'm totally wrong (since I don't work for EMC :-)) and they handle it in a different way, but that's how I see it from reading the native ATMOS API documentation.

What you could try: use the native ATMOS API (which is a bit painful, yes, I know) to create a version of the original object (page 76), update the metadata of that version (User Metadata, page 12), and then restore the version to the top-level object (page 131). After that, check whether the metadata is properly returned through the S3 API.

That's my 2 cents. If you decide to try this solution, post back here whether it worked.

Upvotes: 2
