Akshay Khetrapal

Reputation: 2676

Amazon S3 static site serves old contents

My S3 bucket hosts a static website. I do not have cloudfront set up.

I recently updated the files in my S3 bucket and confirmed manually that the new versions are in the bucket, yet the website still serves an older version of the files. Is there some sort of caching or versioning that happens on static websites hosted on S3?

I haven't been able to find a solution on SO so far. Note: CloudFront is NOT enabled.

Upvotes: 12

Views: 8631

Answers (3)

XYD

Reputation: 693

You need to use CloudFront for this. As @Frederic Henri said, you cannot do much in the S3 bucket itself, but with CloudFront you can invalidate the cached copy.

CloudFront will have cached that file at an edge location for 24 hours, the default TTL (time to live), and will continue to return that file until the 24 hours are up. After that, when a request is made for the file, CloudFront checks the origin to see if it has been updated in the S3 bucket. If it has been updated, CloudFront serves the new version of the object; if not, it continues to serve the original version.

However, when you update the file in the origin and want it served immediately via your website, you need to run a CloudFront invalidation. An invalidation wipes the file(s) from the CloudFront cache, so when a request is made to CloudFront, it sees that the file is not in the cache, checks the origin, and serves the updated file from the origin. Running an invalidation is recommended each time files are updated in the origin.

To run an invalidation:

  • click on the following link for CloudFront console -- https://console.aws.amazon.com/cloudfront/home?region=eu-west-1#
  • open the distribution in question
  • click on the 'Invalidations' tab
  • click on 'Create Invalidation'
  • in the popup, it will ask for the path. You can enter /* to invalidate every object in the cache, or the exact path to the file, such as /images/picture.jpg
  • finally click on 'Invalidate'
  • this typically completes within 2-3 minutes
  • then once the invalidation is complete, when you request the object again through CloudFront, CloudFront will check the origin and return the updated file.
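The console steps above can also be scripted with boto3. A minimal sketch; the distribution ID below is a placeholder you would replace with your own, and the actual API call needs AWS credentials, so it is left commented out:

```python
import time


def build_invalidation_batch(paths):
    """Build the InvalidationBatch payload CloudFront expects.

    CallerReference must be unique per request; a timestamp works.
    """
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        "CallerReference": str(time.time()),
    }


# Requires AWS credentials; uncomment to actually run the invalidation.
# import boto3
# client = boto3.client("cloudfront")
# client.create_invalidation(
#     DistributionId="E1EXAMPLE12345",  # placeholder distribution ID
#     InvalidationBatch=build_invalidation_batch(["/*"]),
# )
```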

Upvotes: 11

43matthew

Reputation: 982

It sounds like Akshay tried uploading with a new filename and it worked.

I just tried the same (I was having the same problem), and it resolved the file not being available for me.

  • Do a push of index.html
  • index.html is not updated on the website
  • mv index.html index-new.html
  • Do a push of index-new.html

After this, index-new.html was immediately available.

That's kind of a pain: I can't share one link to my website and be sure the recipient will see the latest version; I have to keep changing the filename and re-sharing the new link.
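If you do go the renaming route, deriving the new name from a hash of the file's contents automates it: any edit produces a new filename, so a stale cached copy can never be served under the new URL. A hypothetical sketch (the naming scheme here is my own, not anything S3 requires):

```python
import hashlib
from pathlib import PurePosixPath


def versioned_name(filename: str, content: bytes) -> str:
    """Return e.g. 'index.5d4140.html' derived from the file contents.

    Same content -> same name; any edit -> a new name, so the link
    you share always points at the version you just uploaded.
    """
    digest = hashlib.md5(content).hexdigest()[:6]
    p = PurePosixPath(filename)
    return f"{p.stem}.{digest}{p.suffix}"
```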

Upvotes: 3

Frederic Henri

Reputation: 53763

Is there some sort of caching or versioning that happens on Static websites hosted on S3?

Amazon S3 buckets provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES

What does this mean?

If you create a new object in S3, you can access it immediately. However, if you update an existing object, you will only 'eventually' get the newest version from S3, so S3 might still deliver the previous version of the object.

I believe that, starting some time ago, read-after-write consistency is also available for updates in the US Standard region.

How long do you need to wait? It depends; Amazon does not provide much information about this.

What can you do? Not much. If you want to be sure your S3 bucket delivers the latest content, upload it as a new file in your bucket; you will be able to access it immediately.
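One way to detect whether you are being served a stale copy: for a simple single-part upload, the ETag S3 assigns to the object is the hex MD5 of its body, so you can compare what your website returned with the file you uploaded. A sketch under that assumption (note it does not hold for multipart uploads, whose ETags are computed differently):

```python
import hashlib


def simple_etag(content: bytes) -> str:
    """ETag S3 assigns to a single-part PUT: the hex MD5 of the body."""
    return hashlib.md5(content).hexdigest()


def is_stale(fetched_body: bytes, local_body: bytes) -> bool:
    """True if the body fetched from the website differs from the
    version you just uploaded, i.e. S3 served an older copy."""
    return simple_etag(fetched_body) != simple_etag(local_body)
```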

Upvotes: 8
