Reputation: 3874
I'm trying to set up the following for a project:
I have a question regarding the AMIs on which the EC2 instances are based. If I want to make some changes to the systems' configuration (say update the libssl package), I see two options:
What would be the best way to do this (avoiding downtime)? Are there best practices I should stick to?
Thanks
[Edit] I came across aws-ha-release from aws-missing-tools, which can restart all instances in an Auto Scaling group without any downtime. I guess this could be used in conjunction with Packer to force the running instances onto the new AMI. Any feedback on this? It feels a little hacky.
Upvotes: 1
Views: 1784
Reputation: 12876
Here are some options:
If you are trying to prevent downtime while deploying new code, take advantage of the fact that an ELB can have multiple autoscale groups/launch configs associated with it.
You can have:
- Autoscale group A, representing version X of the code
- Autoscale group B, representing version X+1 (including any changes to O/S configuration, such as libssl)
Now when you want to roll out version X+1 of your code, simply "bake" a new AMI, configured exactly how you like it, and add autoscale group B to the ELB. Once the autoscale group and its instances are in service, set the max/desired capacity of autoscale group A to 0, taking the version X servers out of the ELB. Only version X+1 will be running. When new instances come up in the future (e.g. if a server fails), they'll be using your X+1 AMI and have all of its configuration changes.
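The swap above can be sketched with the AWS CLI. This is a minimal, hedged outline; the names (`asg-a`, `asg-b`, `lc-v2`, `my-elb`), AMI ID, instance type, and availability zone are all placeholders you would substitute for your own:

```shell
#!/bin/bash
set -e

# 1. Create a launch config pointing at the freshly baked version X+1 AMI
#    (ami-xxxxxxxx is a placeholder for your new image ID)
aws autoscaling create-launch-configuration \
  --launch-configuration-name lc-v2 \
  --image-id ami-xxxxxxxx \
  --instance-type t2.micro

# 2. Create autoscale group B and attach it to the same ELB
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name asg-b \
  --launch-configuration-name lc-v2 \
  --min-size 2 --max-size 2 --desired-capacity 2 \
  --load-balancer-names my-elb \
  --availability-zones us-east-1a

# 3. Once the new instances report InService on the ELB
#    (check with: aws elb describe-instance-health --load-balancer-name my-elb)
#    scale autoscale group A down to zero, draining version X
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name asg-a \
  --min-size 0 --max-size 0 --desired-capacity 0
```

Step 3 is the point of no return for traffic, so in practice you'd poll the instance health between steps 2 and 3 rather than run these back to back.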
Note that if your application talks to a database, you will need to ensure that version X of the code and version X+1 can operate on the same version of the database. For example, if version X+1 removes a table that version X uses, users hitting version X of your application will get errors. Option 1 works well when there are either no database changes in your code release, or you've built in backwards compatibility when rolling out a new version of the code.
If all you want to do is update the O/S (e.g. apply a patch), then you can combine a tool like Ansible, as you suggested, with the ELB health check.
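One way to sketch that rolling patch: pull each instance out of the ELB, run the patch play against it, and put it back once the health check passes. This is an illustrative outline, not a production script; `my-elb` and `patch-libssl.yml` are placeholder names:

```shell
#!/bin/bash
set -e
ELB_NAME="my-elb"

# Iterate over every instance currently registered with the ELB
for ID in $(aws elb describe-load-balancers --load-balancer-names "$ELB_NAME" \
    --query 'LoadBalancerDescriptions[0].Instances[*].InstanceId' --output text); do

  # Take the instance out of rotation so no traffic reaches it mid-patch
  aws elb deregister-instances-from-load-balancer \
    --load-balancer-name "$ELB_NAME" --instances "$ID"

  # Resolve the instance's address and run the (hypothetical) patch playbook
  HOST=$(aws ec2 describe-instances --instance-ids "$ID" \
    --query 'Reservations[0].Instances[0].PrivateIpAddress' --output text)
  ansible-playbook -i "$HOST," patch-libssl.yml

  # Re-register and wait for the ELB health check to report InService
  aws elb register-instances-with-load-balancer \
    --load-balancer-name "$ELB_NAME" --instances "$ID"
  until [ "$(aws elb describe-instance-health --load-balancer-name "$ELB_NAME" \
      --instances "$ID" --query 'InstanceStates[0].State' --output text)" = "InService" ]; do
    sleep 10
  done
done
```

Because only one instance is out of rotation at a time, the ELB keeps serving traffic from the rest of the group throughout the patch.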
Note on speed
Option 1 will allow failed instances to be in service faster than Option 2 (since you are not waiting on Ansible to run) at the expense of having to "pre-bake" your AMI.
Option 2 will allow you greater flexibility and speed for patching production servers e.g. if you need to "patch something now" this might be the quickest way. Having something like Ansible running and the ability to patch the O/S (separating that task from the deploying code task) can come with additional advantages, depending on your use case. Providing an agent-less hook into your server's configuration (libraries, user management, etc) is quite powerful, especially in the cloud.
Upvotes: 2
Reputation: 3577
Why not consider using the user data field of your Launch Configuration?
All in all, it's 16 KB of pure love, built into your recipe for spawning new machines.
If you're using Linux, you can use a Bash script; on Windows, PowerShell.
No additional tools, all integrated and for free.
P.S. If you need more than 16 KB, have your core script wget your additional scripts and execute them, forming a chain.
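A minimal user data sketch illustrating both ideas, patching a package at launch and chaining in a larger script. The URL and script names here are placeholders; swap in wherever you actually host your bootstrap scripts:

```shell
#!/bin/bash
set -e

# Patch the library at boot (use apt-get instead of yum on Debian/Ubuntu AMIs)
yum update -y openssl

# Chain in additional configuration that wouldn't fit in the 16 KB limit
wget -O /tmp/extra-setup.sh https://example.com/bootstrap/extra-setup.sh
chmod +x /tmp/extra-setup.sh
/tmp/extra-setup.sh
```

Note the trade-off versus a pre-baked AMI: everything in user data runs on every instance launch, so long-running setup here slows down how quickly Auto Scaling can bring replacement instances into service.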
Upvotes: -1