Joe Woodhouse

Reputation: 337

Docker containers: services vs full applications

I'm having an ongoing debate with myself about how to think about and use Docker containers.

From the literature and examples it seems like a container should really provide a service, or part of a stack. For example a container might run MySQL, or Apache, or redis, or whatever. I can understand why this is nice and clean, and makes sense.

In our scenario, we want to host multiple totally separate web applications (e-commerce stores, wordpress sites, static websites, node.js applications) all on the same server, and we want to use Docker. To me it therefore makes more sense for each container to be totally self-contained, with the entire stack inside it, e.g. each of my possibly several running wordpress containers would have its own LAMP installation.

To apply the one-container-one-service model to this scenario seems very complicated: each application will have dependencies on other containers in the system, which will in turn be depended on by other things. And what if you require multiple versions of a particular service?
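To make the one-container-one-service model concrete, here's a hypothetical fig.yml-style sketch (fig being the compose tool for Docker; all image and directory names are placeholders). Note that it also shows how two different MySQL versions could coexist, each isolated in its own container:

```yaml
store1-app:
  build: ./store1
  links:
    - store1-db
store1-db:
  image: mysql:5.6

store2-app:
  build: ./store2
  links:
    - store2-db
store2-db:
  image: mysql:5.5   # a different MySQL version in its own container
```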

Whilst this seems like the way to go, it also seems like it could be very inefficient. I'm not an expert on how LXC works, but even though everything is containerised, all those apache2 workers and mysqld processes really are running on the system, with all their associated overhead - are there going to be performance problems?

Does anyone have any thoughts?

Upvotes: 6

Views: 2326

Answers (3)

saaj

Reputation: 25293

I agree with @Thomasleveil, and moreover I want to mention FLOSS Weekly episode 330, where Docker's original author and now CTO points to the same fact: Docker is just a building block. Educate yourself about it and use it as long as it fits your needs. A lot of people use Docker both ways, process-per-container and application-per-container, and both ways have their pros and cons.

But I also want to warn against using Supervisor as the PID 1 process for managing multiple processes in a container. If you open supervisord.org, one of the first things you'll see is:

Unlike some of these programs, it [Supervisor] is not meant to be run as a substitute for init as “process id 1”. Instead it is meant to be used to control processes related to a project or a customer, and is meant to start like any other program at boot time.

This means that with Supervisor you'll have the zombie process problem described by Phusion and by minit's author. Moreover, Supervisor only manages foreground processes, because it spawns them as its children and doesn't manage children of children. So forget about /etc/init.d/mysql start and figure out how to run everything in the foreground.
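For example, instead of the init script (which daemonizes and detaches the process), the service has to be started in the foreground so that the supervising process sees it as a direct child. A minimal Dockerfile sketch (paths assume a stock Debian MySQL package):

```dockerfile
# run mysqld in the foreground as a direct child of PID 1,
# not via /etc/init.d/mysql, which forks and detaches
CMD ["/usr/bin/mysqld_safe"]
```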

I managed to solve this problem with the aforementioned minit and Monit. minit is needed because Monit is also unable to serve the role of PID 1 (but it's planned for 2015, see #176). Monit is nice because it lets you express dependencies between monitored services (say, don't start the app until the database is up), can handle daemons as is, can monitor memory and CPU, and has a web UI to see what's going on. Here's an excerpt from the Dockerfile I used with this approach on Debian Wheezy:

# installing the rest of dependencies

RUN apt-get install --no-install-recommends -qy monit

WORKDIR /etc/monit/conf.d
ADD webapp.conf ./
RUN echo "set httpd port 2812 and allow localhost" >> /etc/monit/monitrc

ADD minit /usr/bin/minit
RUN mkdir /etc/minit
RUN echo '#!/bin/bash\n  /etc/init.d/monit start; monit start all' \
  > /etc/minit/startup
RUN echo '#!/bin/bash\n \
  monit stop all; while monit status | grep -q Running; do sleep 1; done; \
  /etc/init.d/monit stop' > /etc/minit/shutdown
RUN chmod u+x /etc/minit/*
ENTRYPOINT ["/usr/bin/minit"]

And here's Monit's webapp.conf:

check process webapp with pidfile /var/run/webapp/webappd.pid
  start program = "/etc/init.d/webapp start"
  stop program  = "/etc/init.d/webapp stop"

  if failed host 127.0.0.1 port 8080 for 2 cycles then restart
  if totalmem > 64 MB for 10 cycles then restart

  depends mysql, nginx
  group server

check process mysql with pidfile /var/run/mysqld/mysqld.pid
  start program = "/etc/init.d/mysql start"
  stop program = "/etc/init.d/mysql stop"

  group database

check process nginx with pidfile /var/run/nginx.pid
  start program = "/etc/init.d/nginx start"
  stop program  = "/etc/init.d/nginx stop"

  group server

Upvotes: 1

Thomasleveil

Reputation: 104225

Docker is just a tool, use it as it best suits your needs.

Nothing prevents you from running multiple processes within a Docker container. One way to do this is to start the processes with supervisord as described in this Docker article.
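The setup from that article boils down to a supervisord.conf along these lines (a minimal sketch; the program names and paths are illustrative, not taken from the article):

```ini
[supervisord]
nodaemon=true                        ; keep supervisord in the foreground as PID 1

[program:apache2]
command=/usr/sbin/apache2ctl -D FOREGROUND

[program:mysqld]
command=/usr/bin/mysqld_safe
```

The container's CMD then simply starts supervisord, which in turn launches and restarts the individual services.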

You can also take a look at Phusion's approach to this use case. They highlight what could go wrong when running multiple processes in a Docker container and provide a Docker image (phusion/baseimage) that helps get things set up correctly.
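With phusion/baseimage, each service gets a runit "run" script under /etc/service/, and the image's my_init handles the PID 1 duties (reaping zombies, forwarding signals). A minimal sketch, where webapp.sh is a placeholder for your own foreground start script:

```dockerfile
FROM phusion/baseimage
# runit expects an executable "run" script per service, kept in the foreground
RUN mkdir -p /etc/service/webapp
ADD webapp.sh /etc/service/webapp/run
CMD ["/sbin/my_init"]
```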

Upvotes: 0

Thomas Uhrig

Reputation: 31623

I would prefer the one-container-per-service approach. If you put every service in its own image/container, you have some advantages:

  • You can easily compose new stacks, e.g. use Apache instead of Nginx.
  • You can reuse components, e.g. I deploy the same Logstash image alongside every application to collect logs.
  • You can use predefined services from the Docker Index (now called Docker Hub). If you need to set up a Memcached service, you can just pull the image.
  • You can control every service individually, e.g. stop it or update it. If you want to update your app, you only need to rebuild, upload, and download a single image.
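For example, setting up a predefined Memcached service and then updating just the app container might look like this (image and container names are placeholders):

```shell
docker pull memcached                      # predefined service from the registry
docker run -d --name cache memcached
docker run -d --name shop --link cache:memcached my-shop-image

# updating the app touches only one image/container
docker build -t my-shop-image .
docker stop shop && docker rm shop
docker run -d --name shop --link cache:memcached my-shop-image
```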

Since LXC and Docker seem to be very efficient, I wouldn't mind using multiple containers; this is what Docker was designed for. And I think you will have a reasonable number, let's say <100 containers, so it shouldn't be a problem.

Upvotes: 3
