user3030969

Reputation: 1575

Regarding memory and processes on a server

Still a learner here. I have a Django website that has been deployed on a WebFaction server for about a month now.

Yesterday I set up Celery with supervisor to send emails in the background. I had just finished setting it up and everything was working when I suddenly received this email from WebFaction:

    Hello,

    Right now (2013-12-23 00:06:06 UTC) it appears that your processes on Web330 are using a lot more memory than your plan allows.

    If you haven't read it yet, we recommend that you have a look at our "Reducing Memory Usage" article (http://docs.webfaction.com/software/general.html#reducing-memory-usage) for tips on how to keep your memory usage down.

    Your total allowed memory is 512MB and your current memory usage is 1023MB.

    Since your high memory usage is impacting other users on the server we had to kill your processes (our watchdog first sends a SIGTERM to your processes and then sends a SIGKILL a few seconds later).

    You need to either find a way to keep your memory down or you'll have to upgrade to a plan that allows more memory.

    Please respond to this message to let us know how you're dealing with the problem.

    Below is the list of processes that you're running with the memory that they use (the command used to list these processes is "ps -u hammad -o rss,etime,pid,command"):

    User - Memory - Elapsed Time - Pid - Command:
    --------------------------------------------
    hammad - 51MB - 0:16:44 - 428360 - /home/hammad/webapps/gccfishing/apache2/bin/httpd.worker -f /home/hammad/webapps/gccfishing/apache2/conf/httpd.conf -k start
    hammad - 49MB - 0:16:44 - 428361 - /home/hammad/webapps/gccfishing/apache2/bin/httpd.worker -f /home/hammad/webapps/gccfishing/apache2/conf/httpd.conf -k start
    hammad - 52MB - 0:16:44 - 428362 - /home/hammad/webapps/gccfishing/apache2/bin/httpd.worker -f /home/hammad/webapps/gccfishing/apache2/conf/httpd.conf -k start
    hammad - 50MB - 0:16:44 - 428363 - /home/hammad/webapps/gccfishing/apache2/bin/httpd.worker -f /home/hammad/webapps/gccfishing/apache2/conf/httpd.conf -k start
    hammad - 53MB - 0:16:44 - 428364 - /home/hammad/webapps/gccfishing/apache2/bin/httpd.worker -f /home/hammad/webapps/gccfishing/apache2/conf/httpd.conf -k start
    hammad - 2MB - 0:16:44 - 428365 - /home/hammad/webapps/gccfishing/apache2/bin/httpd.worker -f /home/hammad/webapps/gccfishing/apache2/conf/httpd.conf -k start
    hammad - 11MB - 0:06:29 - 435573 - /home/hammad/webapps/gccfishing/bin/python2.7 /home/hammad/webapps/gccfishing/bin/supervisord -c supervisord_prod.conf
    hammad - 48MB - 0:06:28 - 435577 - /home/hammad/webapps/gccfishing/bin/python2.7 /home/hammad/webapps/gccfishing/bin/celery -A gccFishing worker -l info
    hammad - 39MB - 0:06:27 - 435589 - /home/hammad/webapps/gccfishing/bin/python2.7 /home/hammad/webapps/gccfishing/bin/celery -A gccFishing worker -l info
    hammad - 39MB - 0:06:27 - 435590 - /home/hammad/webapps/gccfishing/bin/python2.7 /home/hammad/webapps/gccfishing/bin/celery -A gccFishing worker -l info
    hammad - 39MB - 0:06:27 - 435591 - /home/hammad/webapps/gccfishing/bin/python2.7 /home/hammad/webapps/gccfishing/bin/celery -A gccFishing worker -l info
    hammad - 39MB - 0:06:27 - 435592 - /home/hammad/webapps/gccfishing/bin/python2.7 /home/hammad/webapps/gccfishing/bin/celery -A gccFishing worker -l info
    hammad - 39MB - 0:06:27 - 435593 - /home/hammad/webapps/gccfishing/bin/python2.7 /home/hammad/webapps/gccfishing/bin/celery -A gccFishing worker -l info
    hammad - 39MB - 0:06:27 - 435594 - /home/hammad/webapps/gccfishing/bin/python2.7 /home/hammad/webapps/gccfishing/bin/celery -A gccFishing worker -l info
    hammad - 39MB - 0:06:27 - 435595 - /home/hammad/webapps/gccfishing/bin/python2.7 /home/hammad/webapps/gccfishing/bin/celery -A gccFishing worker -l info
    hammad - 39MB - 0:06:27 - 435596 - /home/hammad/webapps/gccfishing/bin/python2.7 /home/hammad/webapps/gccfishing/bin/celery -A gccFishing worker -l info
    hammad - 39MB - 0:06:27 - 435597 - /home/hammad/webapps/gccfishing/bin/python2.7 /home/hammad/webapps/gccfishing/bin/celery -A gccFishing worker -l info
    hammad - 41MB - 0:06:27 - 435598 - /home/hammad/webapps/gccfishing/bin/python2.7 /home/hammad/webapps/gccfishing/bin/celery -A gccFishing worker -l info
    hammad - 39MB - 0:06:27 - 435599 - /home/hammad/webapps/gccfishing/bin/python2.7 /home/hammad/webapps/gccfishing/bin/celery -A gccFishing worker -l info
    hammad - 39MB - 0:06:27 - 435600 - /home/hammad/webapps/gccfishing/bin/python2.7 /home/hammad/webapps/gccfishing/bin/celery -A gccFishing worker -l info
    hammad - 39MB - 0:06:27 - 435601 - /home/hammad/webapps/gccfishing/bin/python2.7 /home/hammad/webapps/gccfishing/bin/celery -A gccFishing worker -l info
    hammad - 39MB - 0:06:27 - 435602 - /home/hammad/webapps/gccfishing/bin/python2.7 /home/hammad/webapps/gccfishing/bin/celery -A gccFishing worker -l info
    hammad - 39MB - 0:06:27 - 435603 - /home/hammad/webapps/gccfishing/bin/python2.7 /home/hammad/webapps/gccfishing/bin/celery -A gccFishing worker -l info
    hammad - 39MB - 0:06:27 - 435604 - /home/hammad/webapps/gccfishing/bin/python2.7 /home/hammad/webapps/gccfishing/bin/celery -A gccFishing worker -l info
    hammad - 0MB - 52 days, 14:45:11 - 702447 - 


    Regards,

    WebFaction team - http://www.webfaction.com

I'd like to know a few things regarding this:

  1. Why are there so many instances of Apache, and what is that memory being used for? Doesn't Apache just handle requests? If so, what is in the memory of each instance?

  2. Why has supervisor spawned so many Celery instances? This is my supervisord.conf file:

    [unix_http_server]
    file=/tmp/supervisor.sock   ; (the path to the socket file)
    
    [supervisord]
    logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
    logfile_maxbytes=50MB        ; (max main logfile bytes b4 rotation;default 50MB)
    logfile_backups=10           ; (num of main logfile rotation backups;default 10)
    loglevel=info                ; (log level;default info; others: debug,warn,trace)
    pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
    nodaemon=false               ; (start in foreground if true;default false)
    minfds=1024                  ; (min. avail startup file descriptors;default 1024)
    minprocs=200                 ; (min. avail process descriptors;default 200)
    
    [supervisorctl]
    serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL  for a unix socket
    
    [program:celeryd]
    command=/home/hammad/webapps/gccfishing/bin/celery -A gccFishing worker -l info
    directory=/home/hammad/webapps/gccfishing/gccFishing/gccFishing
    numprocs=1
    autostart=true
    autorestart=true
    startsecs = 10 
    stopwaitsecs = 900 
    

And again, what is each Celery instance's memory being used for? I'm just using it to send some HTML emails (without images or graphics) to, say, 15-20 people every now and then.

Does it really require that much memory? Can the memory be freed after each task, or does it keep accumulating?

Upvotes: 0

Views: 242

Answers (1)

sk1p

Reputation: 6725

Why so many Apache instances? To answer requests concurrently. If your site doesn't need to handle many hits per second, you may want to reduce the number of Apache workers. You can also remove unused modules to further reduce memory usage.
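As a rough sketch, tuning down the worker MPM might look like this in httpd.conf. The exact directives depend on which MPM your WebFaction-installed Apache build uses, and these values are illustrative, not recommendations:

```apache
# worker MPM: one child process with a small thread pool (illustrative values)
StartServers        1
ServerLimit         1
ThreadsPerChild     5
MaxClients          5   # must equal ServerLimit * ThreadsPerChild
```

Restart Apache after changing these and watch your memory footprint with `ps -u <user> -o rss,command`.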

Another possibility is to replace Apache with nginx, and mod_wsgi with something else, for example gunicorn. You might want to check in your dev environment how much memory each gunicorn process needs for your application. Nginx itself is very lightweight; on one of my production servers each nginx worker uses less than 3MB.
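For example, a minimal gunicorn invocation might look like this. The module path `gccFishing.wsgi` and the worker count are assumptions; adjust them to your project layout and traffic:

```
# two sync workers are usually plenty for a low-traffic site (illustrative)
gunicorn gccFishing.wsgi:application --workers 2 --bind 127.0.0.1:8000
```

nginx would then proxy requests to that address.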

What is the memory used for? Assuming you use mod_wsgi, it is used not only by Apache itself but also by your web application, which runs inside those processes. You could use a Python heap profiler, like Heapy, to drill down into individual memory usage. Generally, every module you import and every object you create takes up memory.
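Before reaching for Heapy, you can get a quick whole-process figure from the standard library's `resource` module. This is only a coarse sketch: it reports the process as a whole rather than individual objects, and note that `ru_maxrss` is in kilobytes on Linux but bytes on macOS:

```python
import resource

def peak_rss_mb():
    """Return this process's peak resident set size in MB (Linux units)."""
    # On Linux, ru_maxrss is reported in kilobytes.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0

if __name__ == "__main__":
    print("peak RSS: %.1f MB" % peak_rss_mb())
```

Calling this before and after a heavy import or task gives a rough idea of where the memory goes; Heapy's `hpy().heap()` then breaks it down by object type.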

The number of celeryd workers defaults to the number of available CPUs and can be tuned with the CELERYD_CONCURRENCY setting. The workers are not directly spawned by supervisord, but rather by celery itself. If you don't need a lot of throughput, you can even reduce the number of workers to 1 or 2, and save quite a bit of memory.
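Assuming the pre-4.0 Celery setting names the question implies, either of these caps the pool at two worker processes (the value 2 is just an example):

```python
# settings.py -- old-style (pre-4.0) Celery setting name
CELERYD_CONCURRENCY = 2

# equivalent on the command line (e.g. in your supervisor [program:celeryd] command):
#   celery -A gccFishing worker -l info --concurrency=2
```

With concurrency 2 you'd run one parent plus two pool processes instead of the seventeen shown in the process list, which at roughly 40MB each is most of the overage.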

Upvotes: 2

Related Questions