Filo Stacks

Reputation: 2041

Sensible deployment using EC2

We're currently using RightScale, and every time we deploy, we execute a script on the server or server array that we want to update. It pulls the code from a GitHub repository, creates a new folder in /var/www/releases/TIMESTAMP, and symlinks the document root, /var/www/current, to that directory.
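That release/symlink flow can be sketched as a small shell function. This is a hypothetical reconstruction, not the actual RightScale script; the repo URL and `/var/www` base path are placeholders:

```shell
#!/bin/sh
# Hypothetical sketch of the deploy flow described above: clone the repo
# into a timestamped release directory, then repoint the docroot symlink.

deploy() {
  repo="$1"      # git URL (or path) to clone -- placeholder, not a real repo
  base="$2"      # e.g. /var/www

  timestamp=$(date +%Y%m%d%H%M%S)
  release="$base/releases/$timestamp"

  mkdir -p "$base/releases" &&
  git clone --quiet --depth 1 "$repo" "$release" &&
  # -sfn replaces the symlink in one step, which keeps the docroot swap
  # close to atomic and leaves old releases around for rollback.
  ln -sfn "$release" "$base/current"
}

# Example: deploy "git@github.com:example/app.git" /var/www
```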

We're looking for a better deployment strategy, ideally one where we SSH into one of the servers on the private network and run a command-line script to deploy what we want.

However, this means that this one server would need its public key in the authorized_keys file of every server we want to deploy to. Is this safe? Wouldn't it make that one server a single point of compromise for all the others?

What's the best way to approach this?

Thanks!

Upvotes: 2

Views: 233

Answers (1)

Till

Reputation: 22408

We use a similar strategy to deploy, though we're not with RightScale anymore.

I think that approach is generally fine, and I'd be interested to learn what you think is wrong with it.

If you want to do your SSH thing, then I'd go about it as follows:

  1. Lock down SSH using security groups, e.g. open SSH only to a specific IP, or to servers in a "deploy" security group. The disadvantage here is that you might lock yourself out when those servers are down, etc.
  2. I'd put public keys on each instance to allow password-less login. If you're security-conscious, rotate those keys monthly, or whenever an employee leaves, etc.
  3. Use Fabric or Capistrano to log into your servers (from the deploy master) over SSH and do your deployment.
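Step 3 boils down to a loop over SSH from the deploy master; Fabric and Capistrano layer rollback, parallelism, and logging on top of essentially this. Hostnames, the `deploy` user, and the script path below are assumptions:

```shell
#!/bin/sh
# Minimal sketch of step 3: run a deploy command on each app server over
# ssh from the deploy master. Hostnames, user, and script path are
# placeholders for illustration only.

SSH="${SSH:-ssh -o BatchMode=yes}"   # BatchMode fails fast instead of
                                     # prompting if key auth is broken

run_on_hosts() {
  cmd="$1"; shift
  for host in "$@"; do
    $SSH "deploy@$host" "$cmd" ||
      { echo "deploy failed on $host" >&2; return 1; }
  done
}

# Example: run_on_hosts '/usr/local/bin/deploy.sh' app1.internal app2.internal
```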

Again, I think RightScale's approach is not unique to them; a lot of services do it like that. The reason is that when you symlink and keep the previous versions around, it's easy to roll back, and so on.
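That rollback is the payoff of the releases-plus-symlink layout: since old releases are kept, rolling back is just repointing the symlink. A sketch, assuming timestamped directory names like in your setup:

```shell
#!/bin/sh
# Hypothetical rollback for the releases/ + current-symlink layout from
# the question: repoint "current" at the second-newest release.

rollback() {
  base="$1"      # e.g. /var/www

  count=$(ls -1 "$base/releases" | wc -l)
  if [ "$count" -lt 2 ]; then
    echo "no previous release to roll back to" >&2
    return 1
  fi
  # Timestamped names sort chronologically, so take the second-newest.
  previous=$(ls -1 "$base/releases" | sort | tail -n 2 | head -n 1)
  ln -sfn "$base/releases/$previous" "$base/current"
}

# Example: rollback /var/www
```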

Upvotes: 2
