MAckenzie

Reputation: 35

How do I run a server in the background inside a Docker container so I can test it?

I want to create a CI/CD pipeline that will automatically build, test, and publish a Docker container as soon as I push a commit to GitHub.

To do this I am creating a container using a Dockerfile which looks like this:

FROM python:3.11.9-slim-bookworm
ADD . /
RUN apt-get update --fix-missing && apt-get install -y --fix-missing build-essential
RUN apt-get -y update && apt-get -y install curl
RUN make install 
RUN make lint
RUN make serve & sleep 5 
RUN ./service.sh
RUN make test 
EXPOSE 8000

The make serve target in this case runs a FastAPI application. Here is the Makefile for reference:

install: 
    pip install -r requirements.txt 

lint: 
    pylint --disable=R,C *.py 

serve: 
    python app.py 

test: 
    python -m pytest -vv test_app.py 

clean_cache: 
    rm -rf __pycache__ .pytest_cache 

Here service.sh is an executable script that waits for the FastAPI server to come up; the sleep 5 after make serve gives the server time to start, and the '&' symbol keeps it running in the background. This is the same approach that worked for me when I was trying to do the exact same thing in GitHub Actions.

Here is the service.sh file:

#!/bin/bash
# poll the server up to 10 times, 5 seconds apart, until it responds

for i in {1..10}; do
  if curl -s http://localhost:8000/; then
    echo "Service is up!"
    break
  fi
  echo "Waiting for service to start..."
  sleep 5
done

This setup was good enough for GitHub Actions: the tests ran successfully and the server kept running in the background. However, when I try to do the same inside a Docker container, the make test command fails with a connection pool error indicating that the server has already stopped. So what's the problem? How do I run a server in the background in a Docker container?

I tried using the same workaround that I used for GitHub Actions, based on the post I made over here: How do I run a server concurrently in GitHub Actions

But this doesn't seem to work for Dockerfiles.

Upvotes: 0

Views: 60

Answers (1)

David Maze

Reputation: 159749

A Docker container normally only runs one process. As was noted in a comment, an image doesn't persist running processes in any way; if a RUN command tries to start a background process, it will be killed at the end of that RUN command.
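
You can see this for yourself with a throwaway Dockerfile (purely an illustration, not something you'd ship). The second RUN step prints "already gone": the PID recorded by the first step survives in the filesystem layer, but the process behind it does not:

FROM python:3.11.9-slim-bookworm
# start a background process and record its PID; the RUN step then exits
RUN sleep 300 & echo $! > /tmp/bg.pid
# the PID file is still there, but no process with that PID is running
RUN [ -e "/proc/$(cat /tmp/bg.pid)" ] && echo "still running" || echo "already gone"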

I would try to limit the Dockerfile to only the things you need to build the image. That's doubly true if you have a CI system: you can have the CI environment (and not the Dockerfile) run the linter, and then you can run the integration tests on the built image after the Dockerfile completes.

If you also get rid of the Makefile wrapper here, then your Dockerfile looks like a very routine Python Dockerfile:

FROM python:3.11.9-slim-bookworm

# don't put files in the root directory
WORKDIR /app

# just one apt-get install line
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive \
    apt-get install -y --no-install-recommends build-essential curl

# only install Python dependencies (from cache on rebuilds)
COPY requirements.txt ./
RUN pip install -r requirements.txt

# then install the rest of the application (prefer COPY to ADD)
COPY ./ ./

EXPOSE 8000
CMD ["./app.py"]
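
One detail to double-check with the exec-form CMD ["./app.py"]: it runs the file directly, so app.py needs the executable bit and a shebang line like #!/usr/bin/env python3. If it doesn't have those, the equivalent CMD ["python", "./app.py"] works without them.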

Now in your CI environment, you need to do four things:

  1. Run the linter and any other static checkers, before you invoke Docker at all

    pylint --disable=R,C *.py 
    pytest -m 'not integration'
    
  2. Build the image (using a Compose file like the one sketched after this list)

    docker compose build
    
  3. Start the container stack, run the integration tests, and clean up

    docker compose up -d
    pytest -m integration
    docker compose down -v
    
  4. If that was all successful, push the image

    docker compose push
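
All of the docker compose commands above assume a Compose file next to the Dockerfile. A minimal sketch of what that might look like (the service name, image name, and registry are placeholders, not taken from the question):

# docker-compose.yml
services:
  app:
    build: .
    # the tag that docker compose push will publish
    image: registry.example.com/myorg/myapp:latest
    ports:
      - "8000:8000"  # matches the EXPOSE in the Dockerfile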
    

Note that we've moved the linting, unit testing, and integration testing (that depends on the running service) out of the Dockerfile. The service is started as the image's default CMD, in the foreground, as the only process in the container.
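
Also note that the pytest -m 'not integration' / -m integration split relies on a marker being registered. A minimal sketch, assuming a pytest.ini and that the integration tests use the requests library (both are assumptions, not from the question):

# pytest.ini
[pytest]
markers =
    integration: tests that need the container to be running

# test_app.py (excerpt)
import pytest
import requests

@pytest.mark.integration
def test_root():
    # hypothetical check against the service started by docker compose up
    assert requests.get("http://localhost:8000/").status_code == 200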

Upvotes: -1
