Reputation: 71
I am fairly new to Docker and I haven't found the right workflow for me yet.
My goal is:
Is it possible to achieve these requirements if my project structure looks like the following?
- services
  - react frontend
    (I think it's okay to just put the built static files into the nginx html folder)
  - graphqlapi
    Dockerfile
  - authservice
    Dockerfile
  - another service in the future
    Dockerfile
- docker-compose.yml
I have the docker-compose.yml in the root folder, but the automated build on Docker Hub says that it needs a Dockerfile there to build the image.
For me it would be okay to run all the services in just one image/container, because currently I just want everything running on the same machine.
So again my question: Is it possible to dockerize a multi-service web application into one Docker image/container for the free Docker Hub repository?
Upvotes: 0
Views: 1139
Reputation: 16374
It is possible to run multiple services inside a single container, but I highly discourage you from doing it.
Anyway, you can bundle all services into the same image and then (1) run them in one container using a wrapper script, or (2) run them under a supervisor.
If your constraint is only the single image, you can do better: (3) run multiple containers from the same image, customizing the service each one runs.
The Docker documentation says (emphasis mine):
It is generally recommended that you separate areas of concern by using one service per container. That service may fork into multiple processes (for example, Apache web server starts multiple worker processes).
and then it continues:
It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application.
This is because a Docker container is tied to a single process (usually defined by ENTRYPOINT/CMD in the Dockerfile), and when this process dies, the entire container is stopped.
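For illustration, a Dockerfile that ties the container's lifetime to one process could look like this (the binary name and path are placeholders):

FROM ubuntu
COPY my-service /usr/local/bin/my-service
# This process becomes PID 1 of the container; when it exits, the container stops.
ENTRYPOINT ["/usr/local/bin/my-service"]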
Containers are designed to isolate services: if you want an isolated environment with many non-isolated services (as in the first two approaches described above), it's probably better to use a virtual machine.
The common idea behind every approach is to pack all your applications into the same final image using a single Dockerfile:
FROM ubuntu
# RUN: install the dependencies for every app you have
# COPY: add all your binaries/apps (e.g. service 1, 2, 3), or build them here
Following the first example in the Docker documentation, you can start multiple services using a wrapper script that launches them in the background and then checks every minute that all of them are still running. When one service crashes, the entire container stops.
In this case your image will end with a line like CMD ./my_wrapper_script.sh.
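A minimal sketch of such a wrapper script, modeled on the pattern in the Docker documentation (the service names are placeholders for your own binaries, and pgrep assumes the procps package is present in the image):

#!/bin/bash
# Start every service in the background.
./service-1 &
./service-2 &

# Every 60 seconds, verify that all services are still alive;
# exit as soon as one has died, which stops the container.
while sleep 60; do
  pgrep -f service-1 > /dev/null || exit 1
  pgrep -f service-2 > /dev/null || exit 1
done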
As suggested in the comment above, you can use a supervisor inside the container to run multiple services, avoiding the problem described above. This way you have many processes managed by the supervisor, but if the supervisor itself crashes, all your services are taken down (it is a single point of failure).
In this case your image will end with a line like CMD start-supervisor.
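As a sketch, assuming you install supervisord in the image (e.g. via apt-get install supervisor) and that the program names and paths below are placeholders, the configuration could look like this:

[supervisord]
nodaemon=true

[program:service-1]
command=/usr/local/bin/service-1

[program:service-2]
command=/usr/local/bin/service-2

Here nodaemon=true keeps supervisord in the foreground, so it remains the container's main process.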
If your constraint is only the single image and you can run multiple containers, this is the best approach. Just start multiple containers from the same image, passing the service to run as an explicit command (the last parameter of docker run):
docker run your-image your-service-1
docker run your-image your-service-2
docker run your-image your-service-3
You can also do this with a docker-compose file, as sketched below.
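A sketch of such a compose file, reusing the placeholder image and service names from the commands above:

version: "3"
services:
  service-1:
    image: your-image
    command: your-service-1
  service-2:
    image: your-image
    command: your-service-2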
With this approach you don't "break" the one-service-per-container rule, and you get a more resilient deployment.
Upvotes: 1