Scotty

Reputation: 2675

Where should I run my Grunt build step when building my Docker image for staging and production environments?

I'm really struggling to figure out where I should put my Grunt build step when building my Docker image and deploying to Docker Hub.

My workflow at the moment is as follows:

I do the same workflow as above when merging to master, and a production image is created instead.

It feels a bit weird that I'm creating two separate Docker images. Is this standard practice?

I've seen quite a lot of people include the grunt/gulp build step in their Dockerfiles, but that doesn't feel right either, as all the devDependencies and bower_components will then end up in the image along with the built code.

What's the best practice for running build steps and building Docker images? Is it better to have CI do it, or to have Docker Hub build from the Dockerfile? I'm also after the most efficient way to create my Docker images for staging and production.

Below is my circle.yml file, followed by my Dockerfile.

circle.yml:

machine:
  node:
    version: 4.2.1
  # Set the timezone - any value from /usr/share/zoneinfo/ is valid here
  timezone:
    Europe/London
  services:
    - docker
  pre:
    - sudo curl -L -o /usr/bin/docker 'http://s3-external-1.amazonaws.com/circle-downloads/docker-1.8.2-circleci'; sudo chmod 0755 /usr/bin/docker; true

dependencies:
  pre:
    - docker --version
    - sudo pip install -U docker-compose==1.4.2
    - sudo pip install tutum
  override:
    - npm install:
        pwd: node
  post:
    - npm run bower_install:
        pwd: node
    - npm run grunt_build:
        pwd: node

test:
  override:
    - cd node && npm run test

deployment:
  staging:
    branch: staging
    commands:
      - docker-compose -f docker-compose.production.yml build node
      # - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - tutum login -u $DOCKER_USER -p $DOCKER_PASS -e $DOCKER_EMAIL
      - docker tag dh_node:latest tutum.co/${DOCKER_USER}/dh_stage:latest
      - docker push tutum.co/${DOCKER_USER}/dh_stage:latest

  master:
    branch: master
    commands:
      - docker-compose -f docker-compose.production.yml build node
      # - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - tutum login -u $DOCKER_USER -p $DOCKER_PASS -e $DOCKER_EMAIL
      - docker tag dh_node:latest tutum.co/${DOCKER_USER}/dh_prod:latest
      - docker push tutum.co/${DOCKER_USER}/dh_prod:latest

Dockerfile:

FROM node:4.2

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

COPY package.json /usr/src/app/
RUN npm install --production

COPY . /usr/src/app

#
#
# Commented the following steps out, as these
# now run on CircleCI before the image is built.
# (Whether that's right or not, I'm not sure.)
#
# Install bower
# RUN npm install -g bower # grunt-cli
#
# WORKDIR src/app
# RUN bower install --allow-root
#

# Expose port
EXPOSE 3000

# Run app using nodemon
CMD ["npm", "start"]

Upvotes: 1

Views: 1795

Answers (2)

Abhijeet Kamble

Reputation: 3201

Your CircleCI build should download all the dependencies and then create the Docker image from those downloaded packages. All your tests pass against those specific dependency versions, and the same versions should be carried forward to production. Once the image is pushed to Docker Hub with all dependencies included, Tutum will deploy that same image to production, and because the dependencies are already baked in, containers start in seconds.

Answering your second question, about building the same image twice: I would suggest deploying the same image to production. This guarantees that what worked on staging also works the same way on production.

Upvotes: 0

Peter Lyons

Reputation: 146124

What's the best practice for running build steps and building docker images? Is it better to have CI do it, or dockerhub do it from the dockerfile?

It's better to run the build steps themselves outside of Docker, so that the same steps work for local development, non-Docker deployment, and so on. Keep your coupling to Docker itself loose when you can: build your artifacts with regular build tools and scripts, and simply ADD the built files to your Docker image via your Dockerfile.
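A minimal sketch of that approach, assuming Grunt writes its built output to a `dist/` directory (the directory name is an assumption; adjust it to match your Gruntfile):

```dockerfile
FROM node:4.2

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install only runtime dependencies; devDependencies (grunt-cli, bower)
# stay on the CI machine, where the build already ran.
COPY package.json /usr/src/app/
RUN npm install --production

# Copy only the built output produced by `grunt build` on CI,
# not the raw source tree with bower_components.
COPY dist/ /usr/src/app/

EXPOSE 3000
CMD ["npm", "start"]
```

A `.dockerignore` listing `node_modules` and `bower_components` helps keep the build context small as well.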

It feels a bit weird that I'm creating two separate Docker images. Is this standard practice?

I would recommend instead using in production exactly the image you have already built and tested on staging. Once you rebuild the image, you become vulnerable to discrepancies breaking your production image even though your staging image worked OK. At this point neither docker nor npm can deliver strictly reproducible builds across time, so once it's built and tested gold, it's gold and goes to production bit-for-bit identical.
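In terms of the deployment commands from the question, promoting an already-tested image is just a retag and push; no second `docker-compose build` is needed. A sketch, reusing the image and registry names from the circle.yml above (the `dh_stage`/`dh_prod` names come from the question; the pull step assumes the production deploy runs on a machine that doesn't already have the staging image locally):

```shell
# On the staging deploy: build once, push once.
docker tag dh_node:latest tutum.co/${DOCKER_USER}/dh_stage:latest
docker push tutum.co/${DOCKER_USER}/dh_stage:latest

# On the production deploy: retag the *same* image instead of rebuilding.
docker pull tutum.co/${DOCKER_USER}/dh_stage:latest
docker tag tutum.co/${DOCKER_USER}/dh_stage:latest tutum.co/${DOCKER_USER}/dh_prod:latest
docker push tutum.co/${DOCKER_USER}/dh_prod:latest
```

Because `docker tag` only adds a name to an existing image ID, the bytes that ran on staging are exactly the bytes that reach production.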

Upvotes: 1
