user3343396

Reputation: 765

AWS S3 Node.js - Set Content-Disposition for object

I searched a lot on the internet about this but couldn't find a solution. I'm using pre-signed URLs to give users the ability to upload files to S3, but I also want to be able to do something like this:

export const getSignedUrlForUpload = async (destFilePath) => {
    const s3 = getS3Object({ signatureVersion: 'v4' });

    const params = {
        Bucket: process.env.AWS_BUCKET_NAME,
        Key: destFilePath,
        ContentDisposition: 'attachment; filename=data.jpg',
        ACL: "public-read",
    };

    return new Promise((resolve, reject) => {
        s3.getSignedUrl('putObject', params, (err, data) => {
            if (err) return reject(err);
            return resolve(data);
        });
    });
};

The goal is to make the file downloadable with my desired name, but it doesn't work. How can I achieve that?

Upvotes: 2

Views: 4183

Answers (1)

Andre.IDK

Reputation: 2037

Your code should already work as it is. I have tried with a regular putObject (getSignedUrl supports the same parameters), and here are the results.

I upload the object:

const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3();
// Read the file to upload (the local file name here is just an example)
const fileContent = fs.readFileSync('MDSD.pdf');

const params = {
  Bucket: 'my_bucket',
  Key: 'MDSD.pdf', // This could be anything with standard chars
  Body: fileContent,
  ACL: 'public-read',
  ContentDisposition: 'attachment; filename=test.pdf',
};

s3.upload(params, function (err, data) {
  if (err) {
    throw err;
  }
  console.log(`File uploaded successfully. ${data.Location}`);
});

On the S3 Console it looks like this: (screenshot of the AWS S3 Console, Metadata section)

If we check the object info via the AWS CLI, it looks like this:

aws s3api head-object --bucket my_bucket --key MDSD.pdf
// output
AcceptRanges: bytes
ContentDisposition: attachment; filename=test.pdf
ContentLength: 5067743
ContentType: application/pdf
ETag: '"xxxxxxxxxxxxxxxxxx"'
LastModified: '2020-11-26T15:59:02+00:00'
Metadata: {}
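
The same check can also be done from Node. Here is a minimal sketch using the SDK's headObject call (AWS SDK v2 is assumed, with the same bucket and key as above):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Ask S3 for the object's metadata; ContentDisposition should come back as
// 'attachment; filename=test.pdf', matching the CLI output above.
s3.headObject({ Bucket: 'my_bucket', Key: 'MDSD.pdf' })
  .promise()
  .then((data) => console.log(data.ContentDisposition))
  .catch(console.error);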

If I download the file with wget using the --content-disposition flag, you can see that the suggested filename is respected:

wget https://my_bucket.s3.eu-central-1.amazonaws.com/MDSD.pdf --content-disposition
// output
--2020-11-26 17:00:32--  https://my_bucket.s3.eu-central-1.amazonaws.com/MDSD.pdf
Resolving my_bucket.s3.eu-central-1.amazonaws.com... 10.119.74.124
Connecting to my_bucket.s3.eu-central-1.amazonaws.com|10.119.74.124|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5067743 (4.8M) [application/pdf]
Saving to: 'test.pdf'

test.pdf            100%[===================>]   4.83M  8.85MB/s    in 0.5s    

2020-11-26 17:00:33 (8.85 MB/s) - 'test.pdf' saved [5067743/5067743]
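
The same header is visible from any HTTP client, not just wget. A minimal sketch with axios (it only reads the response header; the URL is the one from the wget example):

const axios = require('axios');

// Download the object and print the Content-Disposition header S3 returns.
axios
  .get('https://my_bucket.s3.eu-central-1.amazonaws.com/MDSD.pdf', { responseType: 'arraybuffer' })
  .then((res) => console.log(res.headers['content-disposition'])) // attachment; filename=test.pdf
  .catch(console.error);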

I have also run another test, this time using exactly your code to sign the URL and then uploading a file programmatically with the axios library:

const fs = require('fs');
const axios = require('axios');

const getUrlAndUpload = async () => {
  // Get the pre-signed URL (your getSignedUrlForUpload code)
  let presignedUrl = await getPresignedUrl();

  // Read the content of the file to upload
  const fileContent = fs.readFileSync('any_name.pdf');

  const options = {
    headers: {
      'Content-Disposition': 'attachment; filename=test.pdf',
      'x-amz-acl': 'public-read',
    },
  };

  let response;
  try {
    response = await axios.put(presignedUrl, fileContent, options);
  } catch (err) {
    console.log(err);
    process.exit(1);
  }
  console.log(response);
  process.exit(0);
};

getUrlAndUpload();
// console.log(response) prints:
// {status: 200, statusText: 'OK', headers: {…}, config: {…}, request: ClientRequest, …}

The important detail (I also stumbled on this while testing) is that when uploading you always need to send the same parameters you signed, plus the x-amz-acl header for the ACL, which in this case is public-read. If, for instance, you add Content-Type while signing the URL, then you also have to set Content-Type in the headers of the upload request.
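
To make that concrete, here is a minimal sketch (the bucket, key, file name and the getSignedUrlPromise variant are assumptions, not the exact code above) that signs the URL with Content-Type as well and then repeats the same values as headers on the upload:

const AWS = require('aws-sdk');
const axios = require('axios');
const fs = require('fs');

const s3 = new AWS.S3({ signatureVersion: 'v4' });

const uploadWithMatchingHeaders = async () => {
  // Sign with ContentType in addition to ContentDisposition and the ACL...
  const params = {
    Bucket: process.env.AWS_BUCKET_NAME,
    Key: 'data.jpg',
    ContentDisposition: 'attachment; filename=data.jpg',
    ContentType: 'image/jpeg',
    ACL: 'public-read',
  };
  const presignedUrl = await s3.getSignedUrlPromise('putObject', params);

  // ...and send exactly the same values as headers on the PUT,
  // plus x-amz-acl for the ACL.
  await axios.put(presignedUrl, fs.readFileSync('data.jpg'), {
    headers: {
      'Content-Disposition': 'attachment; filename=data.jpg',
      'Content-Type': 'image/jpeg',
      'x-amz-acl': 'public-read',
    },
  });
};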

Upvotes: 3
