standac

Reputation: 1097

Is it possible to do dynamic mass hosting of static sites by subdomain on S3?

Let's say I need to host 1000 static web sites, each one a subdomain of a parent domain:

foo.parent.com
bar.parent.com
baz.parent.com
...

Since there is a limit of 100 buckets per AWS account, the content for each site would live in a folder of the parent bucket:

https://s3.amazonaws.com/parent.com/foo/index.html
https://s3.amazonaws.com/parent.com/bar/index.html
https://s3.amazonaws.com/parent.com/baz/index.html
...

Is there a way to point each subdomain to the correct folder? Maybe using Route 53?

Using Apache, it's possible to rewrite all subdomains to the corresponding folder with something like this:

RewriteCond %{HTTP_HOST} ^([^.]+)\.parent\.com$ [NC]
RewriteRule ^(.*)$ folder/%1/$1 [L]

Upvotes: 0

Views: 1134

Answers (2)

Michael - sqlbot

Reputation: 179254

Stop reading here and instead see Serving a multitude of static sites from a wildcard domain in AWS, which describes how this can be done using a single S3 bucket, a single CloudFront distribution, and a Lambda@Edge Origin Request trigger.
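For a taste of how that Lambda@Edge Origin Request trigger works, here is a minimal sketch in Python (the linked answer uses Node.js). The parent.com domain and the folder-per-subdomain layout are taken from the question; it also assumes the Host header is whitelisted for forwarding to the origin, as the linked answer describes:

import re

# One folder per subdomain inside a single bucket, as in the question
HOST_PATTERN = re.compile(r"^([^.]+)\.parent\.com$")

def handler(event, context):
    # Standard shape of a CloudFront origin-request event
    request = event["Records"][0]["cf"]["request"]
    host = request["headers"]["host"][0]["value"]

    match = HOST_PATTERN.match(host)
    if match:
        # foo.parent.com/index.html -> /foo/index.html in the bucket
        request["uri"] = "/" + match.group(1) + request["uri"]

    # Returning the (possibly modified) request sends CloudFront on to S3
    return request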


Update: Not long after this answer was originally posted, AWS relaxed the hard limit of 100 buckets per account, converting it instead to a default limit, which can be increased by describing your use case to AWS support.

Bucket Limit Increase: You can now increase your Amazon S3 bucket limit per AWS account. All AWS accounts have a default bucket limit of 100 buckets, and starting today you can now request additional buckets by visiting AWS Service Limits.

https://aws.amazon.com/about-aws/whats-new/2015/08/amazon-s3-introduces-new-usability-enhancements/


Using S3, by itself, no.

Using S3, with rewrite help from Route 53, no -- DNS doesn't rewrite paths. Ever.

Using S3 in conjunction with a proxy that rewrites paths based on the incoming Host: header... well, yes. Similar to your Apache configuration example, you could configure Apache to rewrite the URL and then [P] proxy (not redirect -- proxy) the request to S3, with the path rewritten. The same could be done with Nginx or HAProxy or several other products... and, if the proxy, or proxies, are in the same region as the bucket, there aren't data transfer charges between EC2 and S3.

I serve content all day long with "Hostname A" fetching content out of "Bucket Name B", with HAProxy rewriting the incoming Host: header to be what S3 expects in order to serve from the intended bucket. Bonus: SSL on my domains with my certs.
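To make the mechanics concrete, here's a toy standard-library Python version of that rewrite-and-proxy idea -- a sketch only, not a substitute for Apache/Nginx/HAProxy. The bucket website endpoint (region included) is an assumption:

import re
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Assumed: the parent.com bucket's website endpoint, in us-east-1
S3_WEBSITE = "http://parent.com.s3-website-us-east-1.amazonaws.com"

class RewriteProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Pick the subdomain out of the incoming Host: header
        host = self.headers.get("Host", "").split(":")[0]
        match = re.match(r"^([^.]+)\.parent\.com$", host)
        if not match:
            self.send_error(404, "Unknown subdomain")
            return
        # foo.parent.com/about.html -> <bucket endpoint>/foo/about.html
        upstream = S3_WEBSITE + "/" + match.group(1) + self.path
        try:
            with urllib.request.urlopen(upstream) as resp:
                body = resp.read()
                self.send_response(resp.status)
                self.send_header("Content-Type",
                                 resp.headers.get("Content-Type", "application/octet-stream"))
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
        except urllib.error.HTTPError as err:
            self.send_error(err.code)

if __name__ == "__main__":
    HTTPServer(("", 8080), RewriteProxy).serve_forever()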

Or... if you're not feeling quite that adventuresome, there's CloudFront. This takes away the need for the hostname to match the bucket, and allows you some additional path-related flexibility.

When you specify the origin for a CloudFront distribution - the Amazon S3 bucket or the custom origin where you store the original version of content - you can now specify a directory path in addition to a domain name.

http://aws.amazon.com/about-aws/whats-new/2014/12/16/amazon-cloudfront-now-allows-directory-path-as-origin-name/

What this means for you is provisioning a CloudFront distribution for each web site, configured to use the appropriate S3 bucket and path within that bucket. (Which, of course, you can automate.)
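As a sketch of that automation with boto3 -- the names and defaults here are mine, not a definitive recipe, and serving the alias over HTTPS would additionally require a ViewerCertificate, omitted here:

import time
import boto3

cloudfront = boto3.client("cloudfront")

def create_site_distribution(subdomain):
    """Create a distribution serving the /<subdomain> folder of parent.com."""
    origin_id = "s3-parent-" + subdomain
    config = {
        "CallerReference": subdomain + "-" + str(int(time.time())),
        "Comment": "Static site for " + subdomain + ".parent.com",
        "Enabled": True,
        "DefaultRootObject": "index.html",
        "Aliases": {"Quantity": 1, "Items": [subdomain + ".parent.com"]},
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": origin_id,
                "DomainName": "parent.com.s3.amazonaws.com",
                "OriginPath": "/" + subdomain,  # the directory-path feature above
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": origin_id,
            "ViewerProtocolPolicy": "allow-all",
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "MinTTL": 0,
        },
    }
    return cloudfront.create_distribution(DistributionConfig=config)

for site in ("foo", "bar", "baz"):
    create_site_distribution(site)

You'd still need a Route 53 ALIAS (or CNAME) record per subdomain pointing at each distribution's domain name.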

AWS doesn't charge for the distribution itself, so this one seems like a bit of a no-brainer. There's a limit of 200 CloudFront distributions per AWS account, but unlike the old hard 100-bucket limit in S3, this appears to be a negotiable default limit -- as the S3 limit now is, too.

Upvotes: 1

E.J. Brennan

Reputation: 46879

Sorry, not currently possible. For S3 static website hosting, the domain record needs to map to a bucket with the exact same name, and there is a 100-bucket limit.

You would need to use multiple AWS accounts to get 1000 buckets, and I am not sure if they frown on that or not.

Probably better to explore another method, such as using an EC2 instance to host the 1000 websites and serving as much of the CSS/JS/images as possible from S3 to offload the traffic.

Upvotes: 1
