voithos

Reputation: 70552

Automated web deployment on multiple servers with Mercurial

I've been looking at some workflows for Mercurial recently, as we start using it for our web development. We need an automated way to propagate changes that are pushed to the testing and live instances to multiple endpoints. Here's a diagram of the idea:

         +-------+
         |Dev    |
         |       |
         +-------+
             |  Push
             +--------+
                      |
                      V
+-------+   Push  +-------+
|Live   |<--------|Test   |
|server |         |server |
+-------+         +-------+
    |    +-------+    |    +-------+
    +--->|Live 1 |    +--->|Test 1 |
    |    |       |    |    |       |
    |    +-------+    |    +-------+
    |                 |
    |    +-------+    |    +-------+
    +--->|Live 2 |    +--->|Test 2 |
    |    |       |    |    |       |
    |    +-------+    |    +-------+
    |                 |
    |    +-------+    |    +-------+
    +--->|Live 3 |    +--->|Test 3 |
         |       |         |       |
         +-------+         +-------+

Basically, the idea is that once development reaches a stable point, all we as developers would have to do is issue a push command (which doesn't necessarily have to be a plain hg push) to the test server, from where the changes would automatically propagate out to the test instances. Then, once testing is done, we'd push from test to live (or, if it's easier, from dev to live), and that would likewise propagate out to each of the live instances.

It would be nice if we could add new test and live instances fairly easily (e.g. maybe if the IPs were stored in a database that could be read by a script, etc...).

What would be the best way to accomplish this? I know about Mercurial hooks; maybe an in-process script that a hook would run? I've also looked into Fabric; would that be a good option?
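For reference, the hook half of this would live in the test server's repository config. A minimal sketch (the script name and path here are hypothetical):

```ini
# .hg/hgrc on the test server's repository
[hooks]
# changegroup fires once per incoming push, after all changesets arrive;
# the script would update the working copy and fan the changes out to
# the test instances (e.g. reading their IPs from a database)
changegroup = /usr/local/bin/propagate-to-test-instances
```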

Also, what kind of supporting software would each of the endpoints need? Would it be easiest to have a Mercurial repository on each server? Would SSH access be beneficial? Etc...

Upvotes: 4

Views: 1377

Answers (1)

overthink

Reputation: 24443

I've done something like this using Mercurial, Fabric, and Jenkins:

   +-------+
   | Devs  |
   +-------+
       | hg push
       V
   +-------+
   |  hg   |  "central" (by convention) hg repo
   +-------+\
    |        \
    |         +--------------+
    | Jenkins job            | Jenkins job
    | pull stable            | pulls test
    | branch & compile       | branch & compile
    |       +-------+        |
    |  +----|Jenkins|-----+  |
    |  |    +-------+     |  |
    V  |                  |  V
   +-------+          +-------+
   | "live"|          | "test"|  shared workspaces ("live", "test")
   +-------+          +-------+
     | Jenkins job         | Jenkins job     <-- jobs triggered
     | calls fabric        | calls fabric        manually in
     |    +-------+        |    +-------+        Jenkins UI
     |--> | live1 |        |--> | test1 |
 ssh |    +-------+    ssh |    +-------+
     |    +-------+        |    +-------+
     |--> | live2 |        |--> | test2 |
     |    +-------+        |    +-------+
     |    ...              |    ...
     |    +-------+        |    +-------+
     +--> | liveN |        +--> | testN |
          +-------+             +-------+
  • I don't have a repo on each web server; I use fabric to deploy only what is necessary.
  • I have a single fabfile.py (in the repo) that contains all the deploy logic.
  • The set of servers (IPs) to deploy to is given as a command-line argument to fabric (it's part of the Jenkins job config).
  • I use Jenkins shared workspaces so I can separate the tasks of pulling and compiling from actually deploying (so I can re-deploy the same code if necessary).
  • If you can get away with a single Jenkins job that pulls, compiles, and deploys, you'll be happier. The shared-workspace thing is a hack I have to use for my setup, and it has downsides.
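The "host list as a command-line argument" idea above can be sketched in plain Python. In the real setup Fabric does this work (`fab -H host1,host2 deploy` performs the same fan-out using the deploy logic in fabfile.py); the version below is a dependency-free illustration, and all names (`parse_hosts`, `build_rsync_cmd`, `deploy`) and paths in it are hypothetical:

```python
# Dependency-free sketch of the deploy fan-out described above.
# Fabric's `fab -H host1,host2 deploy` does the equivalent; names
# and paths here are hypothetical.
import subprocess

def parse_hosts(hosts_arg):
    """Turn the comma-separated host list from the Jenkins job config
    into a clean list of hostnames/IPs."""
    return [h.strip() for h in hosts_arg.split(",") if h.strip()]

def build_rsync_cmd(host, src="build/", dest="/var/www/app/"):
    """Command that pushes the compiled workspace to one web box over ssh."""
    return ["rsync", "-az", "--delete", src, "%s:%s" % (host, dest)]

def deploy(hosts_arg, dry_run=True):
    """Fan the same payload out to every host; returns the command list
    so a dry run can be inspected."""
    cmds = [build_rsync_cmd(h) for h in parse_hosts(hosts_arg)]
    if not dry_run:
        for cmd in cmds:
            subprocess.check_call(cmd)
    return cmds
```

Invoked from the Jenkins job with the host list in the job config, adding a new web box is just one more IP in that argument string.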

To directly address some of your questions:

  • Devs working on the test branch can push at their leisure, and collectively decide when to run the Jenkins job to update the test environment
  • When test is happy, merge it to stable and run the Jenkins job to update the live environment
  • Adding a new web box is just a matter of adding another IP to the command line used to invoke fabric (i.e. in the config for the Jenkins job)
  • All web servers will need to allow SSH access from the Jenkins box
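Concretely, each of these Jenkins jobs boils down to an "Execute shell" build step along these lines (a sketch; the branch name, build command, and IPs are hypothetical stand-ins):

```shell
# Jenkins "update live" job -- Execute shell build step (sketch)
hg pull -u -b stable              # pull and update to the stable branch
make build                        # stand-in for whatever the compile step is
fab -H 10.0.1.1,10.0.1.2 deploy   # fabfile.py in the repo does the ssh work
```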

Upvotes: 3
