damitj07

Reputation: 2899

Deployment (CI/CD) pipeline for a serverless application

I have created a simple Node Express MongoDB app which has 3 API endpoints to perform basic CRUD operations. If I were to deploy this to Heroku as a service and use Bitbucket Pipelines to perform CI/CD, this would do the job for me. On top of this, I could use Heroku pipelines to have multiple stages of environments like dev and production.

After doing all of the above I would be done with my pipeline and happy with it.

Now coming back to serverless: I have deployed my API endpoints to AWS as Lambda functions, and that is the only environment (let's say DEV) present at the moment.

Now how can I achieve a pipeline similar to the one mentioned earlier in a serverless architecture?

None of the solutions out there (maybe I missed some) suggest promoting the actual code that was tried and tested on the dev environment to production; they all deploy a new set of code instead. Is this a limitation?

Upvotes: 4

Views: 569

Answers (1)

ceilfors

Reputation: 2727

Option 1

Presuming that you are developing a Node Serverless application, deploying a new set of code with the same git commit ID and package-lock.json/yarn.lock should result in the same environment. This can be achieved by executing multiple deploy commands to different stages, e.g.:

sls deploy -s dev
sls deploy -s prod

There are various factors that may cause the deployed environments to be different, but the risk of that should be very low. This is the simplest CI/CD solution you can implement.
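Since you mentioned Bitbucket Pipelines, a minimal bitbucket-pipelines.yml for this option could look something like the sketch below. The Node image, the master branch name and the use of npx serverless are assumptions; adjust them to your setup. The trigger: manual key turns the prod deployment into a manual promotion step:

image: node:14

pipelines:
  branches:
    master:
      # Deploy automatically to dev on every push to master
      - step:
          name: Deploy to dev
          caches:
            - node
          script:
            - npm ci
            - npx serverless deploy -s dev
      # Promote to prod manually from the Bitbucket UI
      - step:
          name: Deploy to prod
          trigger: manual
          script:
            - npm ci
            - npx serverless deploy -s prod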

Option 2

If you'd like to avoid the risk from Option 1 at all costs, you can split the packaging and deployment phases in your pipeline. Create the packages from the codebase you have checked out before you deploy:

sls package -s dev --package build/dev
sls package -s prod --package build/prod

Archive the packages as necessary, then deploy them:

sls deploy -s dev --package build/dev
sls deploy -s prod --package build/prod
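In a Bitbucket pipeline this split maps naturally to a packaging step that publishes the build directory as an artifact, followed by separate deploy steps. Again only a sketch, under the same assumptions as the Option 1 example:

image: node:14

pipelines:
  branches:
    master:
      # Package once from the checked-out codebase
      - step:
          name: Package dev and prod
          script:
            - npm ci
            - npx serverless package -s dev --package build/dev
            - npx serverless package -s prod --package build/prod
          artifacts:
            # Hand the packaged output to the deploy steps below
            - build/**
      - step:
          name: Deploy to dev
          script:
            - npm ci
            - npx serverless deploy -s dev --package build/dev
      - step:
          name: Deploy to prod
          trigger: manual
          script:
            - npm ci
            - npx serverless deploy -s prod --package build/prod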

Option 3

This is an improved version of Option 2. I have not tried this solution but it should theoretically be possible. The problem with Option 2 is that you have to execute the package command multiple times, which might not be desirable (YMMV). To avoid packaging more than once, first create the package:

sls package -s dev --package build

Then to deploy:

# Execute a script to modify build/cloudformation-template-update-stack.json to match the dev environment
sls deploy -s dev --package build

# Execute a script to modify build/cloudformation-template-update-stack.json to match the prod environment
sls deploy -s prod --package build

For example, if you have the following resource in build/cloudformation-template-update-stack.json:

"MyBucket": {
  "Type": "AWS::S3::Bucket",
  "Properties": {
    "BucketName": "myapp-dev-bucket"
  }
},

The script you execute before sls deploy should modify that CF resource to:

"MyBucket": {
  "Type": "AWS::S3::Bucket",
  "Properties": {
    "BucketName": "myapp-prod-bucket"
  }
},

This option of course implies that you can't have any hardcoded resource names in your app; every resource name must be injected from serverless.yml into your Lambdas. A sketch of what such a rewrite script could look like follows below.
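To illustrate the idea, here is a naive Node.js sketch. The file name rewrite-stage.js and the "myapp-<stage>-" naming convention are hypothetical, taken from the bucket example above; a real script would have to cover every place the stage name appears in the template:

// rewrite-stage.js (hypothetical helper)
// Usage: node rewrite-stage.js dev prod && sls deploy -s prod --package build
const fs = require('fs');

const [, , fromStage, toStage] = process.argv;
const templatePath = 'build/cloudformation-template-update-stack.json';

// Naive string replacement: assumes all stage-specific resource names follow
// the "myapp-<stage>-" convention shown in the example above.
const template = fs.readFileSync(templatePath, 'utf8');
const rewritten = template.split(`myapp-${fromStage}-`).join(`myapp-${toStage}-`);

fs.writeFileSync(templatePath, rewritten);
console.log(`Rewrote ${templatePath} from ${fromStage} to ${toStage}`);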

Upvotes: 1
