Reputation: 537
We'd like to use the JavaScript AWS SDK to upload files to S3, but without using credentials at all. Uploading with credentials works, but we cannot generate an AWS IAM user for each of our app users (or should we?).
Therefore, similar to using GET, we'd like the server to generate a pre-signed URL, send it to browser, and have the browser upload to that URL.
However, there are no examples of how to accomplish this. Also, if no credentials are set, the SDK errors out even before the upload request to S3 is made:
code: "CredentialsError"
message: "No credentials to load"
The JS SDK docs mention this, so it seems it would be possible:
Pre-signing a putObject (asynchronously)
var params = {Bucket: 'bucket', Key: 'key'};
s3.getSignedUrl('putObject', params, function (err, url) {
console.log('The URL is', url);
});
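For context, the flow we have in mind on the browser side is roughly the following (just a sketch; the /presign endpoint and its JSON response shape are placeholders for whatever our server would expose, not something we have working):
// Ask our own server for a pre-signed URL, then PUT the file straight to S3.
async function uploadViaPresignedUrl(file) {
    const res = await fetch('/presign?key=' + encodeURIComponent(file.name)); // hypothetical endpoint
    const { url } = await res.json();

    await fetch(url, {
        method: 'PUT',
        body: file // a File object from an <input type="file">
    });
}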
Upvotes: 48
Views: 50983
Reputation: 5590
Generate URL
const AWS = require("aws-sdk");
const s3 = new AWS.S3({
    endpoint: 's3-ap-south-1.amazonaws.com', // put your region's endpoint
    accessKeyId: 'AKXXXXXXXXXXXXXXXA6U', // put your accessKeyId
    secretAccessKey: 'kzFHoXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXssoGp', // put your secretAccessKey
    signatureVersion: 'v4',
    region: 'ap-south-1' // put your region
});

const getSignedUrlForPut = async () => {
    const params = {
        Bucket: 'Bucket-Name', // put your bucket name
        Key: '317ec11af14a46b89f400bcf8f9fff1222.pdf',
        Expires: 60 * 5 // URL is valid for 5 minutes
    };
    try {
        const url = await new Promise((resolve, reject) => {
            s3.getSignedUrl('putObject', params, (err, url) => {
                err ? reject(err) : resolve(url);
            });
        });
        console.log(url);
    } catch (err) {
        console.log(err);
    }
};

getSignedUrlForPut();
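If you are on v3 of the AWS SDK for JavaScript, the equivalent presigning looks roughly like this (a sketch, assuming the @aws-sdk/client-s3 and @aws-sdk/s3-request-presigner packages and credentials picked up from the environment):
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

const client = new S3Client({ region: "ap-south-1" });

const getSignedUrlForPutV3 = async () => {
    const command = new PutObjectCommand({
        Bucket: "Bucket-Name",
        Key: "317ec11af14a46b89f400bcf8f9fff1222.pdf"
    });
    // URL is valid for 5 minutes
    return getSignedUrl(client, command, { expiresIn: 60 * 5 });
};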
Upload file via AJAX
// Send the raw File as the request body; wrapping it in FormData would store
// the multipart envelope in S3 instead of the file itself.
var file = fileInput.files[0]; // e.g. director_pan_af8ef2d261c46877f95038622c96e7c0.pdf
var settings = {
    "url": "https://sme-testing.s3-ap-south-1.amazonaws.com/317ec11af14a46b89f400bcf8f9fff1222.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIXXXXXXXXXXXX6U%2F20200525%2Fap-south-1%2Fs3%2Faws4_request&X-Amz-Date=20200525T083419Z&X-Amz-Expires=300&X-Amz-Signature=ea063731d7d043b62d0dc7c0984f4d5792c7f7f41e9ffb52a97d62adadcef422&X-Amz-SignedHeaders=host",
    "method": "PUT",
    "timeout": 0,
    "processData": false,
    "contentType": false,
    "data": file
};

$.ajax(settings).done(function (response) {
    console.log(response);
});
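The same upload works without jQuery; a minimal sketch, assuming signedUrl is the URL generated above and file is the File picked by the user:
fetch(signedUrl, {
    method: "PUT",
    body: file // send the raw File/Blob, not FormData
}).then(function (res) {
    if (!res.ok) throw new Error("Upload failed: " + res.status);
    console.log("Uploaded");
});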
Upvotes: 1
Reputation: 61
Please add ACL and ContentType, it'll make it work.
const param = {
    Bucket: 'Bucket',
    Key: 'fileName',
    ACL: 'public-read',
    ContentType: 'fileType' // the actual MIME type, e.g. 'application/pdf'
};

s3.getSignedUrl('putObject', param, function (err, url) {
    console.log('The URL is', url);
});
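When ACL and ContentType are part of the signed params, the browser generally has to send matching headers on the PUT, otherwise S3 will typically reject the request with a signature mismatch. A minimal sketch, assuming url is the presigned URL and file is a File object:
fetch(url, {
    method: 'PUT',
    headers: {
        'Content-Type': file.type,   // must match the ContentType that was signed
        'x-amz-acl': 'public-read'   // must match the ACL that was signed
    },
    body: file
});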
Upvotes: 2
Reputation: 6675
If you're not using jQuery, this is the minimum you need on the front end:
var xhr = new XMLHttpRequest();
xhr.open('PUT', signedUrl, true);
xhr.setRequestHeader('Content-Type', signedUrlContentType);
xhr.onload = () => {
    if (xhr.status === 200) {
        // success!
    }
};
xhr.onerror = () => {
    // error...
};
xhr.send(file); // `file` is a File object here
See File object docs: https://developer.mozilla.org/en-US/docs/Web/API/File
Then you can add your upload progress as usual:
xhr.upload.onprogress = (event) => {
    if (event.lengthComputable) {
        var percent = Math.round((event.loaded / event.total) * 100);
        console.log(percent);
    }
};
Upvotes: 18
Reputation: 2579
Quite an old question, but it helped me finally get this done. My solution is based on PHP and JavaScript with jQuery.
I have the entire solution nicely wrapped at https://github.com/JoernBerkefeld/s3SignedUpload but here are the essentials:
api.php:
<?php
// Abridged from the class in the linked repo; the $this->... helpers
// ($this->error(), $this->get_file_name(), $this->folder) are defined there.
require_once '/server/path/to/aws-autoloader.php';
use Aws\Common\Aws;
use Aws\S3\Exception\S3Exception;

$BUCKET = "my-bucket";
$CONFIG = "path-to-iam-credentials-file-relative-to-root.php";

function getSignedUrl($filename, $mime) {
    global $BUCKET, $CONFIG;
    $S3 = Aws::factory( $CONFIG )->get('S3');
    if (!$filename) {
        return $this->error('filename missing');
    }
    if (!$mime) {
        return $this->error('mime-type missing');
    }
    $final_filename = $this->get_file_name($filename);
    try {
        $signedUrl = $S3->getCommand('PutObject', array(
            'Bucket' => $BUCKET,
            'Key' => $this->folder . $final_filename,
            'ContentType' => $mime,
            'Body' => '',
            'ContentMD5' => false
        ))->createPresignedUrl('+30 minutes');
    } catch (S3Exception $e) {
        echo $e->getMessage() . "\n";
    }
    $signedUrl .= '&Content-Type=' . urlencode($mime);
    return $signedUrl;
}

echo getSignedUrl($_GET['filename'], $_GET['mimetype']);
Please make sure to add user authentication to your api.php. Otherwise, anyone who knows the path to that file could upload files to your bucket.
credentials.inc.php:
<?php
return array(
'includes' => array('_aws'),
'services' => array(
'default_settings' => array(
'params' => array(
'key' => 'MY-ACCESS-KEY',
'secret' => 'MY-SECRECT',
'region' => 'eu-west-1' // set to your region
)
)
)
);
client.js:
$("input[type=file]").onchange = function () {
for (var file, i = 0; i < this.files.length; i++) {
file = this.files[i];
$.ajax({
url : s3presignedApiUri,
data: 'file='+ file.name + '&mime=' + file.type,
type : "GET",
dataType : "json",
cache : false,
})
.done(function(s3presignedUrl) {
$.ajax({
url : s3presignedUrl,
type : "PUT",
data : file,
dataType : "text",
cache : false,
contentType : file.type,
processData : false
})
.done(function(){
console.info('YEAH', s3presignedUrl.split('?')[0].substr(6));
}
.fail(function(){
console.error('damn...');
}
})
}
};
S3 CORS settings (PUT and OPTIONS are what's actually needed, but OPTIONS cannot be enabled directly):
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
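Newer versions of the S3 console take the CORS configuration as JSON rather than XML; the rule above would look roughly like this:
[
    {
        "AllowedOrigins": ["*"],
        "AllowedMethods": ["GET", "POST", "PUT", "HEAD", "DELETE"],
        "AllowedHeaders": ["*"],
        "MaxAgeSeconds": 3000
    }
]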
Upvotes: 27
Reputation: 6312
In the project I am working on right now, files are uploaded from the client directly to S3; in my case it works in a few steps. The main code parts are in this gist: https://gist.github.com/zxbodya/3cdabd9172bcc89f8ac5
Upvotes: 6
Reputation: 537
I prefer this cleaner approach, via GitHub:
If you already have a presigned URL generated for the browser, you can simply send an XHR request with that URL and the payload to upload to S3. The SDK would not be required to do this. A jQuery example below:
$.ajax({
    url: presignedUrl, // the presigned URL
    type: 'PUT',
    data: 'data to upload into URL',
    success: function () { console.log('Uploaded data successfully.'); }
});
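If the payload is a File or Blob rather than a string, jQuery needs to be told not to preprocess it; a sketch assuming presignedUrl and a file File object:
$.ajax({
    url: presignedUrl,
    type: 'PUT',
    data: file,
    processData: false,       // don't serialize the File into a query string
    contentType: file.type,   // only needed if Content-Type was part of the signature
    success: function () { console.log('Uploaded file successfully.'); }
});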
Upvotes: 0
Reputation: 13501
I can suggest two approaches:
1. You could generate a pre-signed POST form in your app, using a single credential on the server (a minimal sketch follows after this list).
See doc: http://docs.aws.amazon.com/AmazonS3/latest/dev/HTTPPOSTForms.html
2. You could use web identity federation and log in with Google, Facebook, or Amazon.
See doc: http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/browser-configuring-wif.html
Playground: http://aws.typepad.com/aws/2013/08/the-aws-web-identity-federation-playground.html
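For approach 1, a minimal server-side sketch using the JS SDK's createPresignedPost (the bucket name, key prefix, and size limit are placeholders):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.createPresignedPost({
    Bucket: 'my-bucket',                    // placeholder bucket
    Fields: { key: 'uploads/${filename}' }, // S3 substitutes ${filename} at upload time
    Expires: 300,
    Conditions: [['content-length-range', 0, 10 * 1024 * 1024]] // cap uploads at 10 MB
}, (err, data) => {
    if (err) throw err;
    // data.url and data.fields are what the browser posts as a multipart form
    console.log(data.url, data.fields);
});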
Upvotes: 0