Reputation: 5808
I have the Amazon CloudFront gzip feature enabled: "Compress Objects Automatically".
This is happening for all of the files behind my CloudFront distribution, while CSS/JS files served from elsewhere load gzipped (I double-checked that the request headers accept gzip: Accept-Encoding: gzip).
I am really lost trying to figure this out, because every tutorial and Google search result leads to the same explanation: check the radio button "Compress Objects Automatically" - which clearly doesn't help.
I thought maybe the files can't be gzipped because they are too small to compress - but Google's PageSpeed test clearly says these files can be compressed with gzip:
Compressing https://Cloudfront.cloudfront.net/live/static/rcss/bootstrap.min.css could save 100.3KiB (83% reduction).
Compressing https://Cloudfront.cloudfront.net/live/static/rcss/style.css could save 60.5KiB (80% reduction).
Compressing https://Cloudfront.cloudfront.net/live/static/shop/css/jquery.range.css could save 4.6KiB (83% reduction).
Compressing https://Cloudfront.cloudfront.net/live/static/rcss/font-awesome.min.css could save 21.9KiB (77% reduction).
Compressing https://Cloudfront.cloudfront.net/live/static/rcss/responsive.css could save 20KiB (80% reduction).
Compressing https://Cloudfront.cloudfront.net/live/static/general.min.js?ver=9.70 could save 232.9KiB (72% reduction).
Compressing https://Cloudfront.cloudfront.net/live/static/rcss/magnific-popup.css could save 5.7KiB (75% reduction).
Compressing https://Cloudfront.cloudfront.net/live/static/bootstrap.min.js could save 26.4KiB (73% reduction).
Compressing https://Cloudfront.cloudfront.net/…ve/static/plugins/jquery.validate.min.js could save 14KiB (67% reduction).
Compressing https://Cloudfront.cloudfront.net/…tic/plugins/jquery.magnific-popup.min.js could save 13.2KiB (63% reduction).
Compressing https://Cloudfront.cloudfront.net/live/static/plugins/jquery.range.min.js could save 3.9KiB (66% reduction).
Compressing https://Cloudfront.cloudfront.net/live/static/voting/jquery.cookie.js could save 1.2KiB (53% reduction).
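The kind of savings PageSpeed reports can be reproduced locally. A minimal sketch, assuming nothing about the real files - sample.css here is a generated stand-in, not one of the stylesheets above:

```shell
# Generate a repetitive stand-in stylesheet and compare raw vs gzipped size.
printf 'body { margin: 0; padding: 0; }\n%.0s' $(seq 1 200) > sample.css
orig=$(wc -c < sample.css)
gzip -c sample.css > sample.css.gz
comp=$(wc -c < sample.css.gz)
echo "original: ${orig} bytes, gzipped: ${comp} bytes"
```

Pointing this at a real bootstrap.min.css gives ratios in the same 70-85% range PageSpeed reports, so the files are clearly compressible.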
What part am I missing that will get CloudFront to serve these files gzipped?
This is what my response headers look like:
Accept-Ranges:bytes
Cache-Control:max-age=0
Connection:keep-alive
Content-Length:122540
Content-Type:text/css
Date:Sun, 23 Apr 2017 13:14:07 GMT
ETag:"2cb56af0a65d6ac432b906d085183457"
Last-Modified:Tue, 02 Aug 2016 08:49:54 GMT
Server:AmazonS3
Via:1.1 2cb56af0a65d6ac432b906d085183457.cloudfront.net (CloudFront)
X-Amz-Cf-Id:eCPcSDedADnqDZMlMbFjj08asdBSn7_lfR0imlXAT181Y8qRMtSZASDF27AiSTK8PDQ==
x-amz-meta-s3cmd-attrs:uid:123/gname:ubuntu/uname:ubuntu/gid:666/mode:666/mtime:666/atime:666/md5:2cb56af0a65d6ac432b906d085183457/ctime:666
X-Cache:RefreshHit from cloudfront
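Note there is no Content-Encoding header anywhere in that dump, which is the telltale sign the object was served uncompressed. A quick check of such a dump (a sketch - headers.txt is a hypothetical file holding the pasted headers):

```shell
# Save the response headers above to headers.txt, then look for Content-Encoding.
cat > headers.txt <<'EOF'
Accept-Ranges:bytes
Content-Length:122540
Content-Type:text/css
Server:AmazonS3
X-Cache:RefreshHit from cloudfront
EOF

if grep -qi '^content-encoding:' headers.txt; then
  echo "compressed"
else
  echo "not compressed"
fi
```

In practice such a dump can be produced directly with something like `curl -s -o /dev/null -D headers.txt -H 'Accept-Encoding: gzip' <url>`, which requests the file the way a gzip-capable browser would.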
I understand the difference between 200 and 304 responses - after deleting the browser cache it always shows a 200 response.
So there is some caching from CloudFront? I added my bootstrap3.min.css file to the "Invalidation" table - that didn't work.
I also made sure the file is set for compression.
Added this to my website.com.conf file to enable gzip and expose the Content-Length header:
DeflateBufferSize 8096
SetOutputFilter DEFLATE
DeflateCompressionLevel 9
Tried removing DeflateBufferSize 8096 from my .conf file and added <AllowedHeader>Content-Length</AllowedHeader> to the S3 "CORS Configuration". I now do get the Content-Length header right - but the response is still not gzipped. (Following: CloudFront with S3 website as origin is not serving gzipped files.)
This is what I currently get:
Request URL:https://abc.cloudfront.net/live/static/rcss/bootstrap3.min.css
Request Method:GET
Status Code:200 OK
Remote Address:77.77.77.77:443
Referrer Policy:no-referrer-when-downgrade
Response Headers
Accept-Ranges:bytes
Age:1479
Connection:keep-alive
Content-Length:122555
Content-Type:text/css
Date:Wed, 26 Apr 2017 08:48:34 GMT
ETag:"83527e410cd3fff5bd1e4aab253910b2"
Last-Modified:Wed, 26 Apr 2017 08:43:05 GMT
Server:AmazonS3
Via:1.1 5fc044210ebc4ac6efddab8b0bf5a686.cloudfront.net (CloudFront)
X-Amz-Cf-Id:3ZBgDY0c1WV_Pc0o_Bjwa5cQ9D9T-Cr30QDxd_GvD30iQ8W1ImReQIH==
X-Cache:Hit from cloudfront
Request Headers
Accept:text/css,*/*;q=0.1
Accept-Encoding:gzip, deflate, sdch, br
Accept-Language:en-US,en;q=0.8
Cache-Control:no-cache
Connection:keep-alive
Host:abc.cloudfront.net
Pragma:no-cache
Referer:https://example.com/
User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.81 Safari/537.36
Following: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html
If you configure CloudFront to compress content, CloudFront removes the ETag response header from the files that it compresses. When the ETag header is present, CloudFront and your origin can use it to determine whether the version of a file in a CloudFront edge cache is identical to the version on the origin server. However, after compression the two versions are no longer identical.
I get the same ETag, meaning the served CSS file doesn't go through any compression.
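As the quoted documentation says, the compressed and uncompressed versions are no longer identical - and for simple (non-multipart) uploads, S3's ETag is the MD5 of the object's bytes. A local sketch of why compression would necessarily change that digest (style.css here is a stand-in file, not the real stylesheet):

```shell
# Gzipping changes the bytes, so the MD5 (and hence a simple-upload S3 ETag) changes too.
printf 'body { color: red; }\n' > style.css
gzip -k style.css                 # writes style.css.gz, keeps the original (gzip >= 1.6)
md5sum style.css style.css.gz     # two different digests
```

So an unchanged ETag across requests is consistent with the file never having been compressed.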
Thinking maybe I didn't set the compression right for this specific file: it is now set to */bootstrap3.min.css (since it's inside a directory), and before that I had it set to just bootstrap3.min.css. Neither works.
My URL is: https://abc.cloudfront.net/live/static/rcss/bootstrap3.min.css. Following this, I edited my invalidation paths to:
/live/static/rcss/bootstrap3.min.css
/static/rcss/bootstrap3.min.css
/rcss/bootstrap3.min.css
/bootstrap3.min.css
Can this be my actual problem?
Upvotes: 5
Views: 3153
Reputation: 265
It might be because S3 is not sending the required Content-Length response header.
Check this answer for more details: https://stackoverflow.com/a/42448222/4005566
Upvotes: 2
Reputation: 179074
X-Cache: RefreshHit from cloudfront
This means CloudFront checked the origin with a conditional request such as If-Modified-Since, and the response was 304 Not Modified, indicating that the content at the origin server (S3) is unchanged from when CloudFront initially cached the resource, so it served the copy from cache.
...which was probably cached before you enabled "Compress Objects Automatically."
If you think about it, it would be far more efficient for CloudFront only to compress objects as they come in from the origin, not as they go out to the viewer, so files it already has would never get compressed.
This is documented:
CloudFront compresses files in each edge location when it gets the files from your origin. When you configure CloudFront to compress your content, it doesn't compress files that are already in edge locations. In addition, when a file expires in an edge location and CloudFront forwards another request for the file to your origin, CloudFront doesn't compress the file if your origin returns an HTTP status code 304, which means that the edge location already has the latest version of the file. If you want CloudFront to compress the files that are already in edge locations, you'll need to invalidate those files.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html
So a cache invalidation of * is in order, to clear out the uncompressed versions.
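Such an invalidation can be issued from the AWS CLI. A sketch - the distribution ID EXXXXXXXXXXXXX is a placeholder for your own, and the command assumes configured AWS credentials, so it is not runnable as-is:

```shell
# Invalidate everything so CloudFront refetches (and compresses) objects from the origin.
aws cloudfront create-invalidation \
    --distribution-id EXXXXXXXXXXXXX \
    --paths "/*"
```

Note that paths are relative to the distribution root, so a single "/*" covers files in subdirectories like /live/static/rcss/.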
But wait... higher up on the same page, there seems to be conflicting information:
Note
If CloudFront has an uncompressed version of the file in the cache, it still forwards a request to the origin.
Given the information above, that seems to be a discrepancy. But I believe the issue here is one of unspoken assumptions. This note most likely applies only to uncompressed copies that were cached in response to a viewer that did not send Accept-Encoding: gzip. In that case, the correct behavior on CloudFront's part would be to cache the compressed and uncompressed responses independently, and to contact the origin whenever no compressed copy of an object was available and the viewer had indicated that it supports gzip - regardless of whether an uncompressed copy had already been stored as a result of a request from a browser that did not advertise gzip support.
Or, it can be interpreted to mean that CloudFront did still send a request, but since the response was 304, it served the cached copy in spite of it being uncompressed.
Invalidate your cache, then wait for the invalidation to show that it's complete, then try again. This should be all that is needed to correct this behavior.
Upvotes: 7