Matt Luongo

Reputation: 14859

Long running requests with gunicorn + nginx

I've put together an integration server for our Django-powered application. A few of the features are still experimental, and result in overly long requests.

I'm okay with the poor performance, for now, but I need to be able to integrate. Whenever we use the feature that leads to a long request, the app hangs (as expected) and then, after maybe a minute and a half, returns a '502 - Bad Gateway'. The rest of the app works fine.

I checked the gunicorn log, and whenever this happens I get entries like

2012-01-20 17:30:13 [23128] [DEBUG] GET /results/
2012-01-20 17:30:43 [23125] [ERROR] WORKER TIMEOUT (pid:23128)
Traceback (most recent call last):
  File "/home/demo/python_envs/frontend/lib/python2.6/site-packages/gunicorn/app/base.py", line 111, in run
    os.setpgrp()
OSError: [Errno 1] Operation not permitted

However, this happens long before the actual worker timeout, which I've set to 10 minutes just to be sure. Here's part of the upstart script that runs gunicorn.

description "..."

start on runlevel [2345]
stop on runlevel [!2345]
#Send KILL after 5 seconds
kill timeout 5
respawn

env VENV="/path/to/a/virtual/env/"

#how to know the pid
pid file $VENV/run/gunicorn-8080.pid

script
exec sudo -u demo $VENV/bin/gunicorn_django --preload --daemon -w 4 -t 600 --log-level debug --log-file $VENV/run/gunicorn-8080.log -p $VENV/run/gunicorn-8080.pid -b localhost:8080 /path/to/settings.py
end script

I'm running gunicorn version 0.13.4. Any help would be greatly appreciated.

This question is a cross-post from ServerFault.

Upvotes: 3

Views: 7383

Answers (2)

Sina Khelil

Reputation: 1991

Have you tried asynchronous workers in Gunicorn?

They're especially good for slow requests.
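A minimal sketch of what switching to async workers might look like, assuming the gevent worker class and reusing the paths from the question (gunicorn's -k flag selects the worker class; the rest of the flags are unchanged):

```shell
# Install an async worker library, then select it with -k.
# Paths and flags here mirror the upstart script in the question.
pip install gevent
gunicorn_django -k gevent -w 4 -t 600 -b localhost:8080 /path/to/settings.py
```

With an async worker, a single slow request no longer ties up a whole worker process, though it won't help if an upstream proxy times out first.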

Upvotes: 0

Reinout van Rees

Reputation: 13634

Are you connecting directly to gunicorn? Or is there an nginx (or similar) in between? There are some 90-second timeouts in nginx, if I recall correctly.
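If nginx is proxying to gunicorn, its proxy timeouts would explain the 502 arriving well before gunicorn's own 10-minute worker timeout. A minimal sketch of raising them (the location block and values here are illustrative; 600s is chosen to match the -t 600 in the question):

```nginx
# Hypothetical proxy config; merge into your existing server block.
location / {
    proxy_pass http://localhost:8080;
    # nginx's default proxy_read_timeout is 60s; raise it so long
    # requests aren't cut off with a 502/504 before the app responds.
    proxy_read_timeout    600s;
    proxy_send_timeout    600s;
    proxy_connect_timeout 600s;
}
```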

As an aside, for such ill-performing requests, there are generally two solutions:

  • Cache the result and get a cron job to call a custom django management command that does the calculation and fills the cache.

  • An asynchronous task queue like celery does the actual calculation and you do a separate request to check if it's ready.
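The second bullet, the poll-for-result pattern, can be sketched with plain threads standing in for celery (all names here are illustrative, not any real celery API): one cheap request kicks off the work, and a second cheap request checks whether the result is ready.

```python
# Sketch of the poll-for-result pattern: a background thread stands in
# for a celery worker, and a dict stands in for the result cache.
import threading
import time

_results = {}  # task_id -> result, the shared "cache"

def start_slow_task(task_id):
    """Kick off the expensive calculation in the background and return
    immediately, so the HTTP request that triggered it stays fast."""
    def work():
        time.sleep(0.1)  # stands in for the slow computation
        _results[task_id] = 42
    threading.Thread(target=work).start()

def check_result(task_id):
    """What the separate polling request does: is the result ready yet?"""
    if task_id in _results:
        return {"ready": True, "result": _results[task_id]}
    return {"ready": False}

start_slow_task("job-1")
print(check_result("job-1"))  # likely not ready yet
time.sleep(0.5)
print(check_result("job-1"))  # ready, with the computed result
```

In a real Django app the two functions would be two views, the dict would be your cache backend, and the browser would poll the status URL until `ready` is true.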

Upvotes: 4
