Reputation: 4196
I have an S3 bucket that I've linked up to a CNAME alias. Let's assume for now that the domain is media.mycompany.com. The bucket holds image files that are all set to private, yet they are used publicly on my website via URL signing. A signed URL looks something like this (placeholder path and values):

`http://media.mycompany.com/images/photo.jpg?AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE&Expires=1471387200&Signature=...`
This works fine as it is. I'm using an S3 helper library in PHP to generate such URLs. Here's the identifier of that library:
$Id: S3.php 44 2008-12-23 15:38:38Z don.schonknecht $
I know that it is old, but I'm relying on a lot of methods in this library, so it's not trivial to upgrade, and as I said, it works well for me. Here's the relevant method in this library:
public static function getAuthenticatedURL($bucket, $uri, $lifetime, $hostBucket = false, $https = false) {
    $expires = time() + $lifetime;
    $uri = str_replace('%2F', '/', rawurlencode($uri)); // URI should be encoded (thanks Sean O'Dea)
    return sprintf(($https ? 'https' : 'http').'://%s/%s?AWSAccessKeyId=%s&Expires=%u&Signature=%s',
        $hostBucket ? $bucket : $bucket.'.s3.amazonaws.com', $uri, self::$__accessKey, $expires,
        urlencode(self::__getHash("GET\n\n\n{$expires}\n/{$bucket}/{$uri}")));
}
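For context, `self::__getHash` in this library boils down to a base64-encoded HMAC-SHA1 over the Signature Version 2 string-to-sign. Here's a minimal, self-contained sketch of the same signing logic; the function name and placeholder credentials are mine, not part of the library:

```php
<?php
// Sketch of AWS Signature Version 2 URL signing, mirroring the library
// method above. Key, bucket, and path below are placeholder values.
function signV2Url($accessKey, $secretKey, $bucket, $uri, $lifetime, $https = false) {
    $expires = time() + $lifetime;
    $uri = str_replace('%2F', '/', rawurlencode($uri));
    // String to sign: HTTP verb, blank MD5/Content-Type headers, expiry, resource path.
    $stringToSign = "GET\n\n\n{$expires}\n/{$bucket}/{$uri}";
    // Raw HMAC-SHA1, base64-encoded, then URL-encoded for the query string.
    $signature = urlencode(base64_encode(hash_hmac('sha1', $stringToSign, $secretKey, true)));
    return sprintf('%s://%s.s3.amazonaws.com/%s?AWSAccessKeyId=%s&Expires=%u&Signature=%s',
        $https ? 'https' : 'http', $bucket, $uri, $accessKey, $expires, $signature);
}
```

Note that the resource path used in the string-to-sign (`/{$bucket}/{$uri}`) must match what S3 reconstructs on its side; that detail is what goes wrong in the experiments below.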
In my normal, working setup, I'd call this method like this:
$return = $this->s3->getAuthenticatedURL('media.mycompany.com', $dir . '/' . $filename,
$timestamp, true, false);
This returns the correctly signed URL as shared earlier in this post, and all is good.
However, I'd now like to generate HTTPS URLs, and this is where I'm running into issues. Simply switching to HTTPS (by setting the last parameter of the method to true) will not work; it generates a URL like this (placeholder path and values):

`https://media.mycompany.com/images/photo.jpg?AWSAccessKeyId=...&Expires=...&Signature=...`
This will obviously not work, since my SSL certificate (issued by Let's Encrypt) is not installed on Amazon's domain, and as far as I know, there's no way to install it there.
I've learned of an alternative URL format to access the bucket over SSL (placeholder path):

`https://media.mycompany.com.s3.amazonaws.com/images/photo.jpg?...`
This apparently works for some people, but not for me; from what I understand, that's due to the dot (.) character in my bucket name, which breaks Amazon's wildcard SSL certificate. I cannot change the bucket name; it would have large consequences in my setup.
Finally, there's this path-style format (placeholder path):

`https://s3.amazonaws.com/media.mycompany.com/images/photo.jpg?...`
And here I am getting very close. If I take a working non-secure URL, and edit the URL to take on this format, it works. The image is shown.
Now I'd like to have it working in the automated way, from the signing method I showed earlier. I'm calling it like this:
$return = $this->s3->getAuthenticatedURL("s3.amazonaws.com/media.mycompany.com", $dir . '/' . $filename,
$timestamp, true, true);
The change here is the alternative bucket name format, and the last parameter being set to true, indicating HTTPS. This leads to an output like this (placeholder path and values):

`https://s3.amazonaws.com/media.mycompany.com/images/photo.jpg?AWSAccessKeyId=...&Expires=...&Signature=...`
As you can see, it has the same format as the URL I manually crafted to work. But unfortunately, I'm getting signature errors:
<Code>SignatureDoesNotMatch</Code>
<Message>
The request signature we calculated does not match the signature you provided. Check your key and signing method.
</Message>
I'm stuck figuring out why these signatures are incorrect. I tried setting the fourth parameter of the signing method to both true and false, but it makes no difference.
What am I missing?
Edit
Based on Michael's answer below, I did the simple string replace after the call to the S3 library, and it works. Quick-and-dirty code:
$return = $this->s3->getAuthenticatedURL("media.mycompany.com", $dir . '/' . $filename, $timestamp, true, true);
$return = substr_replace($return, "s3.amazonaws.com/", strpos($return, "media.mycompany.com"), 0);
Upvotes: 3
Views: 4765
Reputation: 384
I spent days going round in circles trying to set up a custom CNAME/host for presigned URLs, and it seemed impossible. All the forums said it cannot be done, or that you have to recode your whole app to use CloudFront instead.
Changing my DNS to point from MYBUCKET.s3-WEBSITE-eu-west-1.amazonaws.com to MYBUCKET.s3-eu-west-1.amazonaws.com (the REST endpoint instead of the static website endpoint) fixed it instantly.
Hope this helps others.
Working code:
// Requires the AWS SDK for PHP:
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

function get_objectURL($key) {
    // Instantiate the client.
    $this->s3 = S3Client::factory(array(
        'credentials' => array(
            'key' => s3_key,
            'secret' => s3_secret,
        ),
        'region' => 'eu-west-1',
        'version' => 'latest',
        'endpoint' => 'https://example.com',
        'bucket_endpoint' => true,
        'signature_version' => 'v4'
    ));

    $cmd = $this->s3->getCommand('GetObject', [
        'Bucket' => s3_bucket,
        'Key' => $key
    ]);

    try {
        $request = $this->s3->createPresignedRequest($cmd, '+5 minutes');
        // Get the actual presigned URL.
        $presignedUrl = (string)$request->getUri();
        return $presignedUrl;
    } catch (S3Exception $e) {
        return $e->getMessage() . "\n";
    }
}
Upvotes: 0
Reputation: 179084
The change here is the alternative bucket name format
Almost. This library doesn't quite appear to have what you need in order to do what you are trying to do.
For Signature Version 2 (which is what you're using), your easiest workaround is to take the signed URL in the form https://bucket.s3.amazonaws.com/path and simply do a string replace to https://s3.amazonaws.com/bucket/path.¹ This works because the two forms produce equivalent signatures in V2. It wouldn't work for Signature V4, but you aren't using that.
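That string replace can be sketched as follows; the function name is mine, and the default endpoint assumes us-east-1 (see the footnote on regional endpoints):

```php
<?php
// Sketch: convert a signed virtual-hosted-style S3 URL to path-style,
// leaving the query string (and therefore the V2 signature) untouched.
// Pass the appropriate regional endpoint (e.g. 's3-eu-west-1.amazonaws.com')
// for buckets outside us-east-1.
function toPathStyle($signedUrl, $bucket, $endpoint = 's3.amazonaws.com') {
    return str_replace(
        "://{$bucket}.s3.amazonaws.com/",
        "://{$endpoint}/{$bucket}/",
        $signedUrl
    );
}
```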
That, or you need to rewrite the code in the supporting library to handle this case with another option for path-style URLs.
The $hostBucket option assumes that a CNAME or alias named after the bucket is pointing to the S3 endpoint, which won't work with HTTPS. Setting this option to true on your modified call is actually causing the library to sign a URL for a bucket named s3.amazonaws.com/media.mycompany.com, which is why the signature doesn't match.
If you wanted to hide the "S3" from the URL and use your own SSL certificate, this can be done by putting CloudFront in front of S3. With CloudFront, you can use your own cert and point it at any bucket, regardless of whether the bucket name matches the original hostname. However, CloudFront uses a very different algorithm for signed URLs, so you'd need code to support that. One advantage of CloudFront signed URLs, which may or may not be useful to you, is that you can generate a signed URL that only works from the specific IP address you include in the signing policy.
It's also possible to pass-through signed S3 URLs with special configuration of CloudFront (configure the bucket as a custom origin, not an S3 origin, and forward the query string to the origin) but this defeats all caching in CloudFront, so it's a little bit counterproductive... but it would work.
¹ Note that you have to use the regional endpoint when you rewrite like this, unless your bucket is in us-east-1 (a.k.a. US Standard): the hostname would be s3-us-west-2.amazonaws.com for buckets in us-west-2, for example. For US Standard, either s3.amazonaws.com or s3-external-1.amazonaws.com can be used with HTTPS URLs.
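Picking that hostname can be expressed as a small helper; the function name is mine and the region strings are illustrative:

```php
<?php
// Choose the path-style hostname for a given S3 region.
// us-east-1 (US Standard) uses the bare endpoint; other regions
// use the s3-<region> form described in the footnote above.
function pathStyleHost($region) {
    return ($region === 'us-east-1') ? 's3.amazonaws.com' : "s3-{$region}.amazonaws.com";
}
```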
Upvotes: 2