Create React App + GitLab CI + DigitalOcean droplet - pipeline succeeds but the Docker container is deleted right after

I'm taking my first steps with Docker/CI/CD.

For that, I'm trying to deploy a plain create-react-app to my DigitalOcean droplet (Docker One-Click application) using GitLab CI. These are my files:

Dockerfile

# STAGE 1 - Building assets
FROM node:alpine as building_assets_stage
WORKDIR /workspace

## Preparing the image (installing dependencies and building static files)
COPY ./package.json .
RUN yarn install
COPY . .
RUN yarn build

# STAGE 2 - Serving static content
FROM nginx as serving_static_content_stage
ENV NGINX_STATIC_FILE_SERVING_PATH=/usr/share/nginx/html
EXPOSE 80
COPY --from=building_assets_stage /workspace/build ${NGINX_STATIC_FILE_SERVING_PATH}
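
For what it's worth, the image behaves as expected when built and run locally; a quick sanity check looks like this (the my-app tag and host port 8080 are placeholders):

    docker build -t my-app .
    docker run --rm -d -p 8080:80 --name my-app-test my-app
    curl -I http://localhost:8080    ## expect an HTTP 200 from nginx
    docker stop my-app-test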

.gitlab-ci.yml

## Use a Docker image with "docker-compose" installed on top of it.
image: tmaier/docker-compose:latest
services:
  - docker:dind

variables:
  DOCKER_CONTAINER_NAME: ${CI_PROJECT_NAME}
  DOCKER_IMAGE_TAG: ${SECRETS_DOCKER_LOGIN_USERNAME}/${CI_PROJECT_NAME}:latest

before_script:
  ## Install ssh agent (so we can access the Digital Ocean Droplet) and run it.
  - apk update && apk add openssh-client
  - eval $(ssh-agent -s)

  ## Write the environment variable value to the agent store, create the ssh directory and give the right permissions to it.
  - echo "$SECRETS_DIGITAL_OCEAN_DROPLET_SSH_KEY" | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh

  ## Make sure that ssh will trust the new host, instead of asking
  - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config

  ## Test that everything is set up correctly
  - ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}

stages:
  - deploy

deploy:
  stage: deploy
  script:
    ## Log this machine into the Docker registry, create a production build and push it to the registry.
    - docker login -u ${SECRETS_DOCKER_LOGIN_USERNAME} -p ${SECRETS_DOCKER_LOGIN_PASSWORD}
    - docker build -t ${DOCKER_IMAGE_TAG} .
    - docker push ${DOCKER_IMAGE_TAG}

    ## Connect to the Digital Ocean droplet, stop/remove all running containers, pull latest image and execute it.
    - ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}
    - docker ps -q --filter "name=${DOCKER_CONTAINER_NAME}" | grep -q . && docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG}
    - docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}

    # Everything works, exit.
    - exit 0
  only:
    - master
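
For context, with a project named my-app and a Docker Hub username jane (both placeholders), the two variables above expand to:

    DOCKER_CONTAINER_NAME=my-app
    DOCKER_IMAGE_TAG=jane/my-app:latest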

In a nutshell, on GitLab CI, I do the following:

  1. (before_script) Install the SSH agent and load my private SSH key into it (from a CI variable), so we can connect to the DigitalOcean droplet;

  2. (deploy) I build my image and push it to my public Docker Hub repository;

  3. (deploy) I connect to my DigitalOcean droplet via SSH, pull the image I've just built and run it.

The problem is that if I do everything from my computer's terminal, the container is created and the application is deployed successfully.
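
For reference, the manual sequence that works from my computer's terminal looks roughly like this (the user, droplet address and image tag are placeholders):

    docker login -u jane
    docker build -t jane/my-app:latest .
    docker push jane/my-app:latest
    ssh user@droplet-ip
    ## ...now inside an interactive shell on the droplet:
    docker stop my-app && docker rm -fv my-app
    docker run -d -p 80:80 --name my-app jane/my-app:latest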

If I execute it from the GitLab CI job, the container is created but nothing stays deployed, because the container dies right after.

I can confirm that the container is being erased, because if I manually SSH into the server and run docker ps -a, it doesn't list anything.

I'm mostly confused by the fact that this image's CMD is CMD ["nginx", "-g", "daemon off;"], which shouldn't let my container get deleted, because it keeps a foreground process running.
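
Assuming the nginx image is present locally, the default command can be double-checked with:

    docker inspect --format '{{.Config.Cmd}}' nginx
    ## prints: [nginx -g daemon off;]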

What am I doing wrong? I'm lost.

Thank you in advance.

Upvotes: 0

Views: 2528

Answers (1)

My question was answered by d g - thank you very much!

The problem lies in the fact that I was opening an SSH session to my DigitalOcean droplet and assuming the following commands would run inside its shell, when in reality every CI script line runs on the GitLab runner; the commands to be executed remotely must be passed as an argument to the ssh instruction itself.
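
The difference, sketched with placeholder names:

    ## Runs on the runner: this line opens (and immediately closes) an SSH
    ## session, and the next line's docker command executes on the runner,
    ## not on the droplet.
    - ssh -T user@droplet-ip
    - docker run -d -p 80:80 --name my-app jane/my-app:latest

    ## Runs on the droplet: the command travels as an argument to ssh.
    - ssh -T user@droplet-ip "docker run -d -p 80:80 --name my-app jane/my-app:latest"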

Changed my .gitlab-ci.yml file from:

    ## Connect to the Digital Ocean droplet, stop/remove all running containers, pull latest image and execute it.
    - ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}
    - docker ps -q --filter "name=${DOCKER_CONTAINER_NAME}" | grep -q . && docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG}
    - docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}

To:

    ## Connect to the Digital Ocean droplet and execute everything remotely,
    ## passing the whole command list as a single argument to ssh:
    ## ssh -T digital-ocean-server "docker cmd1; docker cmd2"
    - ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} "docker ps -q --filter \"name=${DOCKER_CONTAINER_NAME}\" | grep -q . && docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG}; docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}"
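
If the one-liner grows unwieldy, an equivalent form (a sketch, assuming the same variables) feeds the remote shell through a heredoc; the variables still expand on the runner because the EOF delimiter is unquoted:

    - |
      ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} /bin/sh <<EOF
      docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG} || true
      docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}
      EOF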

Upvotes: 1
