nickcoxdotme

Reputation: 6697

Amazon S3 downloads index.html instead of serving

I've set up Amazon S3 to serve my static site, speakeasylinguistics.com. All of the DNS stuff seems to be working okay, because dig +recurse +trace www.speakeasylinguistics.com outputs the correct DNS info.

But when you visit the site in a browser using the endpoint, the index.html page downloads, instead of being served. How do I fix this?

I've tried Chrome, Safari, FF. It happens on all of them. I used Amazon's walkthrough on hosting a custom domain to a T.

Upvotes: 88

Views: 52970

Answers (13)

khayali oussama

Reputation: 1

In my case it was an AWS Region issue.

As mentioned in the docs:

Depending on your Region, your Amazon S3 website endpoint follows one of these two formats.

s3-website dash (-) Region: http://bucket-name.s3-website-Region.amazonaws.com

s3-website dot (.) Region: http://bucket-name.s3-website.Region.amazonaws.com

I had created my bucket in eu-west-1, so my endpoint should have been http://bucket-name.s3-website.Region.amazonaws.com (s3-website dot), but the endpoint generated in the AWS Console was http://bucket-name.s3-website-Region.amazonaws.com (s3-website dash).

Replacing the dash with a dot solved the problem in my case.

Upvotes: 0

BoRRis

Reputation: 1041

For Terraform versions > 1.6, setting the content-type as described in the other Terraform answers:

resource "aws_s3_bucket_object" "files" {
  # Iterate over every file under the source directory on the local machine.
  for_each = fileset("C:\\Users\\terraform-aws", "**")

  bucket = aws_s3_bucket.resume_bucket.bucket
  key    = each.value
  source = "C:\\Users\\terraform-aws\\${each.value}"
  # Look up the MIME type by extension in a local.mime_types map
  # (see the next answer); fall back to a binary default.
  content_type = lookup(local.mime_types, regex("\\.[^.]+$", each.value), "application/octet-stream")
  etag = filemd5("C:\\Users\\terraform-aws\\${each.value}")
}

Upvotes: 0

artronics

Reputation: 1496

Here is a solution for uploading a directory (including subdirectories) to s3 while setting the content-type.

locals {
  mime_types = {
    ".html" = "text/html"
    ".css" = "text/css"
    ".js" = "application/javascript"
    ".ico" = "image/vnd.microsoft.icon"
    ".jpeg" = "image/jpeg"
    ".png" = "image/png"
    ".svg" = "image/svg+xml"
  }
}
resource "aws_s3_object" "upload_assets" {
  bucket = aws_s3_bucket.www_bucket.bucket
  for_each = fileset(var.build_path, "**")
  key = each.value
  source = "${var.build_path}/${each.value}"
  content_type = lookup(local.mime_types, regex("\\.[^.]+$", each.value), null)
  etag = filemd5("${var.build_path}/${each.value}")
}

var.build_path is the directory containing your assets. This line:

content_type = lookup(local.mime_types, regex("\\.[^.]+$", each.value), null)

gets the file extension by matching the regex and then uses the provided locals map to look up the correct content_type.

Credit: https://engineering.statefarm.com/blog/terraform-s3-upload-with-mime/

Upvotes: 1

Josh Weston

Reputation: 1890

I had the same problem when uploading to an S3 static site from NodeJS. As others have mentioned, the issue was caused by missing the content-type when uploading the file. When using the web interface, the content-type is automatically applied for you; however, when manually uploading you will need to specify it. List of S3 Content Types.

In NodeJS, you can attach the content type like so:

const { extname } = require('path');
const { createReadStream } = require('fs');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

// add more types as needed
const getMimeType = ext => {
    switch (ext) {
        case '.js':
            return 'application/javascript';
        case '.html':
            return 'text/html';
        case '.txt':
            return 'text/plain';
        case '.json':
            return 'application/json';
        case '.ico':
            return 'image/x-icon';
        case '.svg':
            return 'image/svg+xml';
        case '.css':
            return 'text/css'
        case '.jpg':
        case '.jpeg':
            return 'image/jpeg';
        case '.png':
            return 'image/png';
        case '.webp':
            return 'image/webp';
        case '.map':
            return 'binary/octet-stream'
        default:
            return 'application/octet-stream'    
    }
};

(async() => {
    const file = './index.html';
    const params = {
        Bucket: 'myBucket',
        Key: file,
        Body: createReadStream(file),
        ContentType: getMimeType(extname(file)),
    };
    await s3.putObject(params).promise();
})();

Upvotes: 7

James G

Reputation: 2914

If you are using Hashicorp Terraform you can specify the content-type on an aws_s3_bucket_object as follows

resource "aws_s3_bucket_object" "index" {
  bucket = "yourbucketnamehere"
  key = "index.html"
  content = "<h1>Hello, world</h1>"

  content_type = "text/html"
}

This should serve your content appropriately in the browser.

Edit 24/05/22: As mentioned in the comments on this answer, Terraform now has a module to help with uploading files and setting their content-type attribute correctly

Upvotes: 70

Fernando Taboada

Reputation: 61

I've been through the same issue and resolved it this way: in the S3 bucket, select the index.html checkbox, click the Actions tab, then Edit Metadata. You will notice that under Metadata it says "Type: System defined, Key: Content-Type, Value: binary/octet-stream". Change the value to "text/html" and save the changes. Then select index.html and click the "Open" button. That worked for me.

Upvotes: 1

rpf3

Reputation: 691

I recently came across this issue and the root cause seems to be that object versioning was enabled. After disabling versioning on the bucket the index HTML was served as expected.

Upvotes: 2

CPak

Reputation: 13581

For anyone else facing this issue, there's a typo in the URL you can find under Properties > Static website hosting. For instance, the URL provided is

http://{bucket}.s3-website-{region}.amazonaws.com

but it should be

http://{bucket}.s3-website.{region}.amazonaws.com

Note the . between website and region.
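Since which separator is correct varies by Region, it can help to generate both candidate URLs and probe each with curl -I; a small helper (pure string construction, no AWS calls; the function name is my own):

```python
def website_endpoint_candidates(bucket, region):
    """Return both possible S3 static-website endpoint formats.

    Which one is correct depends on the Region; try each with curl -I.
    """
    return [
        f"http://{bucket}.s3-website-{region}.amazonaws.com",  # dash form
        f"http://{bucket}.s3-website.{region}.amazonaws.com",  # dot form
    ]
```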

Upvotes: 15

codaddict

Reputation: 317

I recently had the same issue pop up. The problem was a change of behavior between CloudFront and the S3 origin: if your S3 bucket is configured to serve a static website, you need to point your CloudFront origin at the website endpoint instead of picking the S3 origin from the pulldown. If you are using Terraform, your origin should be aws_s3_bucket.var.website_endpoint instead of aws_s3_bucket.var.bucket_domain_name.

Refer to the AWS documentation here

Upvotes: 2

dannisis

Reputation: 462

If you are trying to upload with Boto3 on Python 3.7 or above, set the Content-Type like this:

s3 = boto3.client('s3')
s3.upload_file(local_file, bucket, s3_file, ExtraArgs={'ContentType': 'text/html'})
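If you don't want to hard-code the type for each file, Python's standard-library mimetypes module can guess it from the file name; a small sketch (the commented upload call is just an illustration, and the names are placeholders):

```python
import mimetypes


def content_type_for(path):
    """Guess a Content-Type from the file name, with a binary fallback."""
    guessed, _ = mimetypes.guess_type(path)
    return guessed or "application/octet-stream"


# Example usage with upload_file (bucket/key are placeholders):
# s3.upload_file(local_file, bucket, s3_file,
#                ExtraArgs={'ContentType': content_type_for(local_file)})
```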

Upvotes: 14

e-israel

Reputation: 646

If you are uploading to S3 from Bitbucket Pipelines with a Python script, add a content_type parameter as follows:

s3_upload.py

def upload_to_s3(bucket, artefact, bucket_key, content_type):
...

def main():
...
    parser.add_argument("content_type", help="Content Type File")
...

if not upload_to_s3(args.bucket, args.artefact, args.bucket_key, args.content_type):

and modify bitbucket-pipelines.yml as follows:

...
- python s3_upload.py bucket_name file key content_type 
...

where the content_type param can be any of the MIME types (IANA media types).

Upvotes: 0

Brombomb

Reputation: 7076

If you are doing this programmatically you can set the ContentType and/or ContentDisposition params in your upload.

[PHP Example]

      $output = $s3->putObject(array(
          'Bucket' => $bucket,
          'Key' => md5($share). '.html',
          'ContentType' => 'text/html',
          'Body' => $share,
      ));

putObject Docs

Upvotes: 34

dc5

Reputation: 12441

Running curl -I against the URL you posted gives the following result:

curl -I http://speakeasylinguistics.com.s3-website-us-east-1.amazonaws.com/
HTTP/1.1 200 OK
x-amz-id-2: DmfUpbglWQ/evhF3pTiXYf6c+gIE8j0F6mw7VmATOpfc29V5tb5YTeojC68jE7Rd
x-amz-request-id: E233603809AF9956
Date: Sun, 18 Aug 2013 07:58:55 GMT
Content-Disposition: attachment
Last-Modified: Sun, 18 Aug 2013 07:05:20 GMT
ETag: "eacded76ceb4831aaeae2805c892fa1c"
Content-Type: text/html
Content-Length: 2585
Server: AmazonS3

This line is the culprit:

Content-Disposition: attachment

If you are using the AWS console, I believe this can be changed by selecting the file in S3 and removing this property from its metadata.

Upvotes: 64
