Reputation: 499
I have a pretty basic question about docker that I can't seem to get an answer to.
What is the difference between having 1 container running nginx and 500 virtual hosts and 500 containers each based off an nginx image (each with different configs)?
Seems like maybe the latter case (500 containers) would multiply the memory requirements of a single container by 500. But maybe Docker is smarter than that (it seems AUFS can share memory somehow)?
Basically I'm wondering how to set up a system for hosting many low-traffic WordPress instances. Is it OK to make a new container for each instance (nginx + PHP)?
Upvotes: 9
Views: 4027
Reputation: 9481
An application's memory footprint depends on several things:
All Docker containers share the same kernel, so it is loaded only once for all instances. The AUFS storage driver additionally lets containers share files from common image layers, so application code is also loaded once for all containers.
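The layer sharing works because images are built in layers, and 500 images that differ only in a thin per-blog layer still share everything beneath it. A hypothetical sketch (the image name, PHP setup, and config file names are assumptions for illustration):

```dockerfile
# Base layers (nginx:alpine here) are stored on disk once, and the
# read-only files in them are loaded into the page cache once, no
# matter how many containers run from images built on this base.
FROM nginx:alpine

# Only this thin layer differs per blog, so 500 blog images add
# 500 tiny config layers on top of one shared base.
COPY blogA.conf /etc/nginx/conf.d/default.conf
```

Each container additionally gets its own writable layer, which is where the non-shared, per-instance data discussed below accumulates.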
Application data, both static and operational, is never shared between containers, so you multiply this part of the footprint by 500.
Kernel resources and operational application data are never shared in either scenario: if a user requests a page from blogA and another from blogB, each page is going to be generated and sent no matter how things are laid out.
In your case, one nginx process with 500 virtual hosts will most likely have the smaller memory footprint. By how much is very hard to tell: it depends on how busy the blogs are, how much network buffering needs to be done, and whether you have a shared database and memcached server. The only sure way to tell is to set it up and observe.
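For the single-process case, nginx does not even need 500 separate server blocks: one server block can serve all blogs by deriving the document root from the Host header. A minimal sketch, assuming a shared domain pattern and directory layout (the paths, domain, and PHP-FPM socket are assumptions):

```nginx
# One nginx instance serving many low-traffic blogs. The server
# block is shared; only the document root varies with the hostname.
server {
    listen 80;
    # Capture the blog name, e.g. blogA.example.com -> $blog = blogA
    server_name ~^(?<blog>.+)\.example\.com$;

    root /var/www/$blog;
    index index.php;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php-fpm.sock;
    }
}
```

This keeps the per-blog overhead to little more than a directory on disk, which is why the single-process setup tends to win on memory.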
However, with containers you can have multiple boxes, so when things get tight you can move a single container to a separate box without affecting the rest of your users. You can also run more instances of a particular blog if it gets very busy and spread those instances over several boxes. Look into things like Docker Swarm.
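As an illustration of that flexibility, a Swarm stack file can scale out one busy blog without touching the other 499. A hedged sketch (the image and service names are assumptions, not anything from the question):

```yaml
# Hypothetical Swarm stack file: run two replicas of one busy blog
# and keep them on separate boxes for resilience.
version: "3.8"
services:
  blogA:
    image: my-wordpress-blog:blogA
    deploy:
      replicas: 2
      placement:
        max_replicas_per_node: 1
```

Deployed with `docker stack deploy`, Swarm handles the placement; nothing about the other blogs' containers changes.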
Another advantage of containers is that each individual nginx can have a very simple configuration, instead of one monster config with 500 virtual hosts.
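Concretely, a per-container config needs no virtual hosts at all, because each container serves exactly one blog. A minimal sketch (paths and the PHP-FPM address are assumptions):

```nginx
# The only server block in the container: whatever hostname is
# routed here is this blog, so no server_name matching is needed.
server {
    listen 80 default_server;
    root /var/www/html;
    index index.php;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
    }
}
```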
Upvotes: 6