Darksymphony

Reputation: 2713

Dockerfile and Jenkinsfile - COPY repository

I am trying to clone all my git repos in my Jenkinsfile:

stage('Clone git') {
    steps {
        // dir() creates the directory if needed and scopes the checkout to it,
        // so separate mkdir/cd shell steps are not required
        dir('repo1') {
            git([url: 'https://github.../repo1.git', branch: 'develop', credentialsId: '***'])
        }
        dir('repo2') {
            git([url: 'https://github.../repo2.git', branch: 'develop', credentialsId: '***'])
        }
    }
}

stage('Build Docker') {
    steps {
        // each sh step starts a fresh shell at the workspace root,
        // so a separate "cd .." step has no effect
        sh 'docker build -t myimage .'
        sh 'docker run myimage'
    }
}

and then in my Dockerfile I have this:

FROM node:12.2.0-alpine
COPY . /myapp
WORKDIR /myapp
RUN apk update

FROM gradle:6.3-jdk11 AS build
COPY --chown=gradle:gradle . /repo1
WORKDIR /repo1
RUN gradle build --no-daemon

WORKDIR /repo2
RUN npm install
RUN ng serve

Please help me understand if I am doing it the right way, especially the COPY command.

I think it works this way: first it will clone the git repos into the repo1 and repo2 folders.

Then in the Dockerfile it will create the layout inside the myapp folder, copying both repo folders under the myapp folder.

The problem is that I need to run both services. repo1 is the backend and needs a Gradle build; that's why I COPY into the repo1 folder in the gradle stage, though I'm not sure if I should use myapp/repo1 in that COPY instead.

And repo2 is the frontend; I think there is no need to COPY anything there, just to set WORKDIR to repo2 and run the server.

So the question is: is it enough to COPY . /myapp in the first stage? Will that copy both repo folders there, or does it just copy the Docker build context into the myapp folder, so that I need to copy each created folder separately?

Upvotes: 3

Views: 1615

Answers (2)

Noam Yizraeli

Reputation: 5454

When using a single Dockerfile, only one Docker image can be created, so if you want both services to run you need to build and run them separately (probably with the -d flag for detached mode).
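
For example, building and running the two services as two images could look like this (a sketch, assuming the backend and frontend each get their own Dockerfile in repo1 and repo2; the image names, container names, and port are illustrative assumptions, not from the question):

# build one image per service and run each detached
docker build -t backend ./repo1
docker build -t frontend ./repo2
docker run -d --name backend backend
# 4200 assumes the Angular dev server's default port
docker run -d --name frontend -p 4200:4200 frontend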

Using multiple applications in the same container - docker docs:

A container’s main running process is the ENTRYPOINT and/or CMD at the end of the Dockerfile. It is generally recommended that you separate areas of concern by using one service per container. That service may fork into multiple processes (for example, Apache web server starts multiple worker processes). It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application. You can connect multiple containers using user-defined networks and shared volumes.
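
Following that advice, a user-defined network lets the two containers talk to each other by name instead of living in one container (a sketch; the network, image, and container names are illustrative):

docker network create myapp-net
docker run -d --name backend --network myapp-net backend
# the frontend can now reach the backend at http://backend:<its port>
docker run -d --name frontend --network myapp-net -p 4200:4200 frontend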

You can create a script to run at container startup, such as this one:

#!/bin/bash

# Start the first process
./my_first_process -D
status=$?
if [ $status -ne 0 ]; then
  echo "Failed to start my_first_process: $status"
  exit $status
fi

# Start the second process
./my_second_process -D
status=$?
if [ $status -ne 0 ]; then
  echo "Failed to start my_second_process: $status"
  exit $status
fi

# Naive check runs checks once a minute to see if either of the processes exited.
# This illustrates part of the heavy lifting you need to do if you want to run
# more than one service in a container. The container exits with an error
# if it detects that either of the processes has exited.
# Otherwise it loops forever, waking up every 60 seconds

while sleep 60; do
  ps aux |grep my_first_process |grep -q -v grep
  PROCESS_1_STATUS=$?
  ps aux |grep my_second_process |grep -q -v grep
  PROCESS_2_STATUS=$?
  # If the greps above find anything, they exit with 0 status
  # If they are not both 0, then something is wrong
  if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then
    echo "One of the processes has already exited."
    exit 1
  fi
done

Then have your CMD instruction run the script, like so:

CMD ./my_wrapper_script.sh
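
Wired into a Dockerfile, that could look roughly like this (a sketch; the base image and /myapp path are carried over from the question):

FROM node:12.2.0-alpine
# the wrapper script uses bash, which alpine images don't ship by default
RUN apk add --no-cache bash
COPY . /myapp
WORKDIR /myapp
# make the wrapper executable and use it as the container's main process
RUN chmod +x my_wrapper_script.sh
CMD ["./my_wrapper_script.sh"]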

Another option Docker lists in its docs is supervisord, which, with the appropriate configuration, can manage all the main processes you intend to run in your container; here's how to configure it. Remember, though, that it's another package to install in your container or base image.
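
A minimal configuration could look like this (a sketch; the two command lines are assumptions about how your backend jar and frontend server would be started, and the artifact paths are illustrative):

; supervisord.conf -- run both services under one supervisor
[supervisord]
nodaemon=true

[program:backend]
command=java -jar /repo1/build/libs/app.jar

[program:frontend]
command=node /repo2/server.js

Your Dockerfile would then end with something like CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"] (the binary path depends on how supervisord was installed).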

Also, because Node doesn't serve website files statically from a default directory, you would need to use the node command and point it at the entry JS file that defines a web server (with express or the built-in http module) to serve the files; see this example. If you want a production web server, you are better off building your project and using a dedicated web server like nginx or Apache httpd: you build the project (possibly inside the Dockerfile), put the output in the server's web root, and start the web server process; here's an example of that.
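
For the production route, a multi-stage frontend Dockerfile could look roughly like this (a sketch, assuming the Angular CLI is a dev dependency and that the dist/ output path matches your project; adjust both to your setup):

FROM node:12.2.0-alpine AS build
WORKDIR /repo2
COPY . .
RUN npm install && npx ng build --prod

FROM nginx:alpine
# copy the built static files into nginx's default web root
COPY --from=build /repo2/dist/ /usr/share/nginx/html/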

Upvotes: 3

ddegasperi

Reputation: 752

If you simply want to run both apps, you could split your Dockerfile into two Dockerfiles, one for the backend and one for the frontend, and reference them in a docker-compose.yml.

Example docker-compose.yml:

version: '3.9'
services:
  backend:
    image: backend:${TAG:-latest}
    build:
      context: ./backend
      dockerfile: Dockerfile

  frontend:
    image: frontend:${TAG:-latest}
    build:
      context: ./frontend
      dockerfile: Dockerfile

To run both services with docker-compose, execute docker-compose up.
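
The two Dockerfiles could then look roughly like this (a sketch reusing the stages from the question; the backend jar path and the frontend CMD are assumptions, and --host 0.0.0.0 makes the Angular dev server reachable from outside the container):

# backend/Dockerfile
FROM gradle:6.3-jdk11
COPY --chown=gradle:gradle . /repo1
WORKDIR /repo1
RUN gradle build --no-daemon
# the jar name is an assumption; point this at your actual build artifact
CMD ["java", "-jar", "build/libs/app.jar"]

# frontend/Dockerfile
FROM node:12.2.0-alpine
COPY . /repo2
WORKDIR /repo2
RUN npm install
CMD ["npx", "ng", "serve", "--host", "0.0.0.0"]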

Upvotes: 0
