Marcin Adamczyk

Reputation: 509

Amazon S3 and CloudFront don't gzip on the fly

I want to use the new CloudFront feature that gzips files on the fly when the request carries an Accept-Encoding: gzip header. I set up my CDN distribution, turned on "Compress Objects Automatically", and whitelisted the headers Origin, Access-Control-Request-Headers and Access-Control-Request-Method (I'm using AngularJS, so I need them for the OPTIONS preflight requests). I don't have any CORS configuration set on my S3 bucket.
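For reference, here is a minimal boto3 sketch (not part of the original setup) to double-check that compression and the header whitelist are actually applied to the distribution. DISTRIBUTION_ID is a placeholder, and it assumes the legacy ForwardedValues-style cache behavior that was standard when this question was asked:

import boto3

cloudfront = boto3.client("cloudfront")
config = cloudfront.get_distribution_config(Id="DISTRIBUTION_ID")["DistributionConfig"]

behavior = config["DefaultCacheBehavior"]
# "Compress Objects Automatically" in the console maps to this flag
print("Compress enabled:   ", behavior["Compress"])
# Headers whitelisted for forwarding to the origin
print("Whitelisted headers:", behavior.get("ForwardedValues", {}).get("Headers", {}).get("Items", []))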

As stated in their docs, it should start working once I add an Accept-Encoding: gzip header to the request. However, I'm still getting the raw file.

Response Headers

Accept-Ranges:bytes
Age:65505
Cache-Control:public, max-age=31557600
Connection:keep-alive
Content-Length:408016
Content-Type:text/css
Date:Mon, 21 Mar 2016 16:00:36 GMT
ETag:"5a04faf838d5165f24ebcba54eb5fbac"
Expires:Tue, 21 Mar 2017 21:59:21 GMT
Last-Modified:Mon, 21 Mar 2016 15:59:22 GMT
Server:AmazonS3
Via:1.1 0e6067b46ed4b3e688f898d03e5c1c67.cloudfront.net (CloudFront)
X-Amz-Cf-Id:gKYTTq0cIcUvHTtlrdMig8D1R2ZVdea4EnflV0-IxhtaxgRvLYj6LQ==
X-Cache:Hit from cloudfront

Request Headers

Accept:text/css,*/*;q=0.1
Accept-Encoding:gzip, deflate, sdch
Accept-Language:pl,en-US;q=0.8,en;q=0.6
Cache-Control:max-age=0
Connection:keep-alive
Host: XXX.cloudfront.net
Referer: XXX
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.87 Safari/537.36
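To reproduce the comparison above, the sketch below (my addition, with a placeholder URL standing in for the real asset) requests the same file with and without Accept-Encoding: gzip and prints the relevant response headers. It uses urllib, which does not send Accept-Encoding on its own, so the uncompressed case is easy to trigger:

import urllib.request

url = "https://XXX.cloudfront.net/path/to/file.css"  # placeholder for the asset in question

for accept_encoding in (None, "gzip"):
    req = urllib.request.Request(url)
    if accept_encoding:
        req.add_header("Accept-Encoding", accept_encoding)
    with urllib.request.urlopen(req) as resp:
        print("Accept-Encoding sent:", accept_encoding)
        print("  Content-Encoding: ", resp.headers.get("Content-Encoding"))
        print("  Content-Length:   ", resp.headers.get("Content-Length"))
        print("  X-Cache:          ", resp.headers.get("X-Cache"))
        print("  Age:              ", resp.headers.get("Age"))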

My configuration is: [screenshot of the distribution settings]

Upvotes: 2

Views: 1478

Answers (1)

Michael - sqlbot

Reputation: 179074

Notice these two response headers.

Age: 65505
X-Cache: Hit from cloudfront

This object was cached by a prior request, 65,505 seconds (≅ 18 hours) before you requested it this particular time.

Once CloudFront has cached an object at a particular edge, if you later configure the relevant cache behavior to enable on-the-fly compression, CloudFront won't go back and re-compress objects already in its cache. It will continue to serve the original version of the object until it's evicted.

If the object was cached before you enabled compression on the distribution (that is, if you turned compression on less than 18 hours ago), that is the most likely explanation for what you are seeing.

CloudFront compresses files in each edge location when it gets the files from your origin. When you configure CloudFront to compress your content, it doesn't compress files that are already in edge locations. In addition, when a file expires in an edge location and CloudFront forwards another request for the file to your origin, CloudFront doesn't compress the file if your origin returns an HTTP status code 304, which means that the edge location already has the latest version of the file. If you want CloudFront to compress the files that are already in edge locations, you'll need to invalidate those files.

http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html

Evict everything from your distribution's cache by submitting an invalidation request for the path * (to cover everything), or just for this particular /path or /path*, etc. Within a few minutes, all cached content for your distribution (or only the content matching the specific path, if you don't invalidate *) will be evicted. Wait for the invalidation to show that it's complete, and you should see compression working on subsequent requests.
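If you prefer to script it, a minimal boto3 sketch of that invalidation might look like this (DISTRIBUTION_ID is a placeholder; swap "/*" for a narrower path if you only need specific files):

import time
import boto3

cloudfront = boto3.client("cloudfront")
response = cloudfront.create_invalidation(
    DistributionId="DISTRIBUTION_ID",  # placeholder
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},  # or a narrower path such as "/css/*"
        "CallerReference": str(time.time()),  # any string that is unique per request
    },
)

# boto3 ships a waiter that polls until the invalidation is complete.
cloudfront.get_waiter("invalidation_completed").wait(
    DistributionId="DISTRIBUTION_ID",
    Id=response["Invalidation"]["Id"],
)
print(response["Invalidation"]["Id"], "completed")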

Keep an eye on the Age: header (it shows how long CloudFront has had a copy of the particular response); once it drops off and then resets, I would venture a guess that you'll see what you expect.

If this doesn't resolve the issue, there is another possibility, but I'd expect this to be a fairly unusual occurrence:

In rare cases, when a CloudFront edge location is unusually busy, some files might not be compressed.

http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html

Upvotes: 5
