Reputation: 495
Let me first explain the context.
Context
Currently we're working with a Jenkins server and use Chef Server for our configuration management. We're moving towards a more continuous deployment environment and this is the workflow that I've been working on:
In (manually triggered) promotions to the staging and production environments, no internet connection is available. The RPM overcomes this problem. The cookbooks are developed using Berkshelf.
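For reference, Berkshelf can bundle a cookbook together with all of its resolved dependencies into a single archive, which fits this kind of air-gapped promotion; a minimal sketch (the upload step assumes knife is already configured against the internal Chef Server):

```bash
# Resolve dependencies from the Berksfile, then bundle the cookbook plus
# all dependencies into one tarball; the archive layout may differ
# slightly between Berkshelf versions.
berks install
berks package cookbooks.tar.gz

# Later, on a machine that can reach the internal Chef Server:
tar -xzf cookbooks.tar.gz
knife cookbook upload --all --cookbook-path cookbooks
```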
The Node.js applications deployed this way sometimes use natively compiled libraries (one project has 3+ dependencies that compile native code).
I know very little about these kinds of deployment processes, but one disadvantage I've heard is that by using RPMs and compiling only once, the build environment (currently Jenkins itself) must have the same architecture as the deployment environments. The upside of using RPMs is that the artifact remains exactly identical across all environments: it doesn't need recompiling and doesn't pull hundreds of dependencies from everywhere.
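For what it's worth, a tool like fpm is one common way to produce such an RPM from a pre-built app directory; a rough sketch (the name myapp and the paths are placeholders, not taken from the actual workflow):

```bash
# Compile all dependencies once on the build box; this is where the
# matching-architecture requirement comes from.
npm install --production

# Wrap the whole directory, node_modules included, into a single RPM.
fpm -s dir -t rpm -n myapp -v 1.0.0 -a x86_64 --prefix /opt/myapp .
```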
Still, the workflow seems a bit elaborate, and having to stick to the same architecture doesn't feel very flexible to me.
For our use case we need the following:
For my own projects I've been using Heroku most of the time, which takes no effort to set up. The workflow above took two weeks to create (the first time).
Questions
The sheer effort to manage all this leads me to question some of the above steps:
Any experiences you might be able to share would be much appreciated!
Upvotes: 4
Views: 995
Reputation: 1178
1) It's better to ship all the dependencies with your app and npm rebuild them on the target machine. Or, if you want to go enterprise, you can rebuild the modules on the build server and pack them into a tarball / Docker or LXC container / VM image / you name it. There is no silver bullet. Personally I prefer plain LXC containers. But the general approach is: bundle the modules with the app and rebuild the binary modules on the target platform.
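A minimal sketch of that "bundle and rebuild" approach (the path is illustrative):

```bash
# node_modules was shipped with the app; recompile only the native
# (C/C++) addons against the target machine's architecture and Node
# version, leaving pure-JS modules untouched.
cd /opt/myapp
npm rebuild
```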
2) For simple script applications it's better to use a tarball or even git clone. Really, you don't need all the power and complexity of system package managers in that case. But if you're going to use a custom-built nginx, some kind of system-wide library, or something like that, you're better off using RPM or DEB and setting up an appropriate repo for your custom packages.
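For example, a bare-bones tarball deployment might look like this (host name and paths are made up):

```bash
# On the build machine: archive the app, excluding VCS metadata.
tar -czf myapp-1.0.0.tar.gz --exclude='.git' .

# Ship it, unpack it on the target, and rebuild native addons there.
scp myapp-1.0.0.tar.gz deploy@target:/tmp/
ssh deploy@target 'mkdir -p /opt/myapp && tar -xzf /tmp/myapp-1.0.0.tar.gz -C /opt/myapp'
ssh deploy@target 'cd /opt/myapp && npm rebuild'
```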
3) I'm not using Chef, but it's better to separate the deployment scripts into a standalone repo for any kind of big project. What I mean is that your deployment code is not your application's code; keeping both together is like having two separate apps in one repo. Possible, but not good practice.
4) It's pretty OK. It's OK for scaling, because you can start with just one physical machine and grow as you go (though that only sounds easy; I spent a hell of a lot of time making my current project scalable). But it's always very good for integration testing: you can spawn a whole environment, run the integration tests, grab the results, and start over with new tests in a fresh environment.
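Vagrant is one common tool for that spawn-test-destroy cycle (just an example, not necessarily what you're using; it assumes a Vagrantfile describing the environment already exists, and the test command is a placeholder):

```bash
vagrant up                                  # bring up a fresh environment
vagrant ssh -c 'cd /vagrant && npm test'    # run the integration suite inside it
vagrant destroy -f                          # tear it down; the next run starts clean
```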
Upvotes: 4