skilgarriff

Reputation: 91

Proper Docker Setup for Multiple Websites

I'm wondering what the proper flow is for a setup hosting multiple websites using a simple LEMP stack. Suppose I have a VM from DigitalOcean and I want to host 3 websites on this one VM. To accept requests, I have either an HAProxy container or an Nginx virtual-host container sitting at the front. That then routes each request to the stack of containers that handles that application:

Requests => Nginx/HAproxy =>

(website1.com) => (Nginx, php-fpm, mysql stack)

(website2.com) => (Nginx, php-fpm, mysql stack)

(website3.com) => (Nginx, php-fpm, mysql stack)
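For illustration only, this layout could be sketched as a Docker Compose file along these lines (all service names, image tags, and paths here are placeholders, not part of the question):

```yaml
# docker-compose.yml -- hypothetical sketch of the layout above
version: "3"
services:
  proxy:                  # single entry point: Nginx or HAProxy
    image: nginx:stable
    ports: ["80:80", "443:443"]
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d:ro
  web1:                   # per-site Nginx
    image: nginx:stable
    volumes:
      - ./website1/nginx.conf:/etc/nginx/conf.d/default.conf:ro
  php1:                   # per-site php-fpm
    image: php:fpm
  db1:                    # per-site MySQL
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: change-me
  # web2/php2/db2 and web3/php3/db3 would follow the same pattern
```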

If this is the proper setup, how do I go about scaling the individual websites? If website 2 gets a lot more traffic and needs more containers, would I scale up the entire stack (nginx, php-fpm, mysql) x2, or would I let the Nginx container at the front of that stack load balance between multiple php-fpm instances?

website2.com => 1(nginx, php-fpm, mysql) or 2(nginx, php-fpm, mysql) using round robin.

OR

website2.com => (nginx, php-fpm, php-fpm, php-fpm, mysql) where that nginx container handles round robin between the php-fpm containers.
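The second option corresponds to an Nginx `upstream` block that distributes FastCGI requests across several php-fpm containers (round robin is Nginx's default balancing method). A sketch, assuming the php-fpm containers are reachable as `php1`/`php2`/`php3` on the default port 9000:

```nginx
upstream php_backend {
    # round robin by default
    server php1:9000;
    server php2:9000;
    server php3:9000;
}

server {
    listen 80;
    server_name website2.com;
    root /var/www/website2;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php_backend;
    }
}
```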

As a further side note, where do I place my SSL certificates for each website? Do I put them in the HAProxy/Nginx virtual-host container, or do I just have them exist in each stack's Nginx container?

Upvotes: 1

Views: 1514

Answers (1)

Edson Marquezani Filho

Reputation: 2696

First of all, be aware that containers are supposed to run a single process, not entire stacks. Although it's possible to run an init-like process that spawns multiple processes inside a single container, or even something like Supervisord, that's not a best practice. Containers are meant to be used as microservices.

So, without considering the database, one possible scenario would be:

  1. Multiple application containers, one per website, running php-fpm;
  2. One single webserver container, running Nginx and including a vhost config for each website, and linked to all your application containers;
  3. One single load balancer container, running whatever you choose as a solution for proxy-balancing, with all the configuration needed for the websites, including SSL.
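As a sketch of point 3, an Nginx-based balancer could terminate SSL and proxy requests to the webserver container by hostname. Certificate paths and the `webserver` upstream name below are hypothetical:

```nginx
server {
    listen 443 ssl;
    server_name website1.com;

    # certificates live in the load balancer container, per point 3
    ssl_certificate     /etc/nginx/certs/website1.com.crt;
    ssl_certificate_key /etc/nginx/certs/website1.com.key;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://webserver;   # the single Nginx container from point 2
    }
}
```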

So, as I understand it, you would have at least two layers: Load Balancer/Proxy -> Applications. Item 3 goes in the upper layer, and items 1 and 2 go in the lower one, and those are supposed to run on the same server.

Regarding the database service, I'm not sure it's a good choice to run it in a container. You would have to map a data directory from the host (because containers are ephemeral, but your data is not), and it wouldn't make sense to move your container to other servers without moving the data along, so a container seems of little use in this case. You also won't use the container to streamline deployments here, anyway, so I can't see much advantage in it. It would be better just to have a separate server for the database.
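If you do run MySQL in a container anyway, the data directory would have to be a host mount, along these lines (the host path is illustrative):

```yaml
# compose fragment -- persist MySQL's data directory on the host
db:
  image: mysql:8
  environment:
    MYSQL_ROOT_PASSWORD: change-me
  volumes:
    # host path keeps the data when the container is destroyed
    - /srv/mysql-data:/var/lib/mysql
```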

About scaling: if you decide to run all sites on the same server, you don't have much choice but to scale everything together. Alternatively, you could split things up by service, with a group of machines for each "microservice" (let's say, website1+nginx1, website2+nginx2, etc.). This way you could scale each service independently, but it would probably mean a lot of overhead and wasted resources. It all depends on how complex you think you need to get.

This is not an easy question to answer, and there is no right or wrong when it comes to architectures like this. It's just a matter of deciding what trade-off works for you.

Upvotes: 1
