Reputation: 738
We are running a Rails 5.1 site that uses the Asset Pipeline to generate hashed (fingerprinted) assets at deployment. To optimize performance, our assets are served from the application server(s) and then cached in AWS CloudFront the first time they are requested.
When we deploy, we do a rolling deployment: we bring up new servers running the new code and terminate the servers running the old code as the new ones come online. At any point during a deployment, a request for an asset can be answered by any server (new or old), since they are all behind the same AWS Application Load Balancer.
For example, we have two asset files: admin-2d1d6c00a49c.js (built by the old code) and admin-aac83de85860.js (built by the new code).
If a request comes in for admin-aac83de85860.js and an older server takes the request, it cannot locate the asset and returns a 404, and CloudFront caches that error response. This means all future requests for admin-aac83de85860.js get the cached 404, even though the new servers have the file.
How do we either get both sets of assets cached in AWS CloudFront, or direct traffic for the new assets only to the new servers as they are added to the pool?
Upvotes: 7
Views: 343
Reputation: 617
I was able to solve this issue by deploying our assets to S3. After running rails assets:precompile, copy public/assets and public/packs into S3. Then set up your CloudFront origin to be the S3 bucket you place your assets in.
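For reference, here is a sketch of that copy step as a Rake task that shells out to the AWS CLI. The task name, bucket name, and cache headers are my own assumptions, not part of the original answer:

    # lib/tasks/assets_s3.rake -- hypothetical helper task
    namespace :assets do
      desc "Sync precompiled assets and packs to S3 (bucket name is a placeholder)"
      task :upload_to_s3 do
        %w[assets packs].each do |dir|
          # `aws s3 sync` only uploads new or changed files, so fingerprinted
          # files from earlier deploys stay in the bucket alongside new ones.
          ok = system("aws", "s3", "sync", "public/#{dir}", "s3://my-app-assets/#{dir}",
                      "--cache-control", "public, max-age=31536000, immutable")
          abort("sync of public/#{dir} failed") unless ok
        end
      end
    end

Because sync leaves existing objects in place, the bucket accumulates every fingerprint you have ever deployed, which is exactly what makes rolling deploys safe here.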
When you do a rolling deploy, both admin-2d1d6c00a49c.js and admin-aac83de85860.js will be reachable on your CDN.
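One related detail, which is an assumption about the surrounding setup rather than part of the answer above: the app still has to generate asset URLs that point at the CloudFront distribution, which in Rails is the asset_host setting (the domain below is a placeholder):

    # config/environments/production.rb
    Rails.application.configure do
      # Placeholder domain; substitute your CloudFront distribution's domain.
      config.action_controller.asset_host = "https://d1234abcdefgh.cloudfront.net"
    end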
Upvotes: 1