Stafford Williams

Reputation: 9806

Docker uses all memory and crashes the system

I have an AWS t2.micro EC2 instance with Docker on it, and I bring up the following containers:
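Roughly along these lines (the image tags, names and password below are placeholders rather than my exact commands):

    # Start MySQL, WordPress and nginx as linked containers (placeholder values)
    docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql
    docker run -d --name wordpress --link mysql:mysql wordpress
    docker run -d --name nginx --link wordpress:wordpress -p 80:80 nginx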

This results in something like the following docker stats output:

CONTAINER   MEM USAGE/LIMIT     MEM % 
wordpress   331.9 MB/1.045 GB   31.77%
nginx       18.32 MB/1.045 GB   1.75% 
mysql       172.1 MB/1.045 GB   16.48%

Then I run siege with its default of 15 concurrent connections against the site. This spawns multiple Apache processes, which push the instance to its memory limit; Docker and bash then crash from lack of memory, and I have to intervene to get everything running again.
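For reference, the load test is essentially this (the URL is a placeholder):

    # siege defaults to 15 concurrent users; run for one minute against the site
    siege -c 15 -t 1M http://my-ec2-host/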

I have a couple of questions regarding this.

  1. Am I expecting too much? Should this setup be able to handle 15 concurrent connections? If so, what changes* need to be made?
  2. How can I automate recovery from this? Is there a way to detect that memory is nearing capacity and react (for example, by rejecting requests) until usage drops? Is there a way to keep the system stable during the high request volume so that, once it's over, it doesn't need my intervention to come back up?

* I've already done this to drop MySQL memory from 22% to 15%.

Upvotes: 3

Views: 6123

Answers (3)

Stafford Williams

Reputation: 9806

The change with the biggest impact, and the one that stopped the EC2 instance from falling over, was limiting the memory a Docker container can use with the -m option, per @palfrey's answer.

Some additional tweaks were required to reduce the memory footprint and have the service respond to 15 concurrent users, albeit somewhat slowly. These included:

MySQL
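
A few of the usual memory-hungry settings can be trimmed; the values below are illustrative for a 1 GB host rather than the exact ones I used:

    # /etc/mysql/conf.d/low-memory.cnf -- illustrative values for a 1 GB host
    [mysqld]
    performance_schema = OFF
    innodb_buffer_pool_size = 64M
    key_buffer_size = 8M
    max_connections = 30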

WordPress

  • Disabling KeepAlive (see the snippet after this list)
  • Limiting servers:

    <IfModule mpm_prefork_module>
        # Keep the Apache process pool small so it can't exhaust the instance's RAM
        StartServers            1
        MinSpareServers         1
        MaxSpareServers         3
        MaxRequestWorkers       10
        MaxConnectionsPerChild  3000
    </IfModule>

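For the KeepAlive change, the directive itself is just the following (the file location assumes the Debian/Ubuntu layout used by the official images):

    # In /etc/apache2/apache2.conf (Debian layout)
    KeepAlive Off
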
Docker

I created some Docker images that extend the default images to include these optimisations:
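
For example, the WordPress image extension looks roughly like this; the base tag and file names are placeholders rather than the exact published images:

    # Sketch: extend the official WordPress (Apache) image with the tweaks above
    FROM wordpress:latest

    # Drop in the reduced prefork pool shown earlier
    COPY mpm_prefork.conf /etc/apache2/mods-available/mpm_prefork.conf

    # Disable KeepAlive in the main Apache config (Debian layout)
    RUN sed -i 's/^KeepAlive On$/KeepAlive Off/' /etc/apache2/apache2.conf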

Further details in my blog post.

Upvotes: 3

Tom Parker-Shemilt

Reputation: 1697

Given that a t2.micro only has 1 GB of memory in total, and each of those containers has a 1 GB limit of its own, have you tried limiting the maximum memory usage of each container (as per http://docs.docker.com/engine/reference/run/#user-memory-constraints) so that the combined limits don't exceed 1 GB?
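
For example, something along these lines keeps the combined limits well under 1 GB (the exact numbers are just a starting point, not a recommendation):

    # Cap each container's memory so the three limits sum to well under 1 GB
    docker run -d --name mysql -m 300m -e MYSQL_ROOT_PASSWORD=secret mysql
    docker run -d --name wordpress -m 300m --link mysql:mysql wordpress
    docker run -d --name nginx -m 128m --link wordpress:wordpress -p 80:80 nginx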

Upvotes: 4

datasage

Reputation: 19573

  1. Probably. A t2.micro only has 1 GB of RAM. You can run this configuration without Docker just fine, but you do have to adjust for the memory limitations, and Docker probably adds some overhead. Is there a reason for running both nginx and Apache?

  2. Generally, you test and limit your threads to what the system can handle; there are probably also things you can do with caching to improve performance. Apache, nginx, and php-fpm all have settings that control how many threads or worker processes can be created, as in the sketch below.
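
For instance, with php-fpm the worker pool can be pinned to a fixed size; the values here are illustrative for a 1 GB host:

    ; Illustrative php-fpm pool limits for a 1 GB host
    [www]
    pm = static
    pm.max_children = 5
    pm.max_requests = 500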

Upvotes: 0
