CrazyCasta

Reputation: 28302

Requests not being distributed across gunicorn workers

I'm trying to write an app using tornado with gunicorn managing the worker processes. I've written the code shown below, but despite starting multiple workers, the requests aren't being spread across them: one worker handles all of the requests, every time (it's not intermittent).

Code:

from tornado.web import RequestHandler, asynchronous, Application
from tornado.ioloop import IOLoop

import time
from datetime import timedelta
import os

class MainHandler(RequestHandler):
    def get(self):
        print "GET start"
        print "pid: "+str(os.getpid())
        time.sleep(3)  # deliberately blocks this worker for 3 seconds
        self.write("Hello, world.<br>pid: "+str(os.getpid()))
        print "GET finish"

app = Application([
    (r"/", MainHandler)
])

Console output (I refreshed three browser tabs well within the 3-second window, yet they all hit the same process and are handled sequentially):

2014-04-12 20:57:52 [30465] [INFO] Starting gunicorn 18.0
2014-04-12 20:57:52 [30465] [INFO] Listening at: http://127.0.0.1:8000 (30465)
2014-04-12 20:57:52 [30465] [INFO] Using worker: tornado
2014-04-12 20:57:52 [30474] [INFO] Booting worker with pid: 30474
2014-04-12 20:57:52 [30475] [INFO] Booting worker with pid: 30475
2014-04-12 20:57:52 [30476] [INFO] Booting worker with pid: 30476
2014-04-12 20:57:52 [30477] [INFO] Booting worker with pid: 30477
GET start
pid: 30474
GET finish
GET start
pid: 30474
GET finish
GET start
pid: 30474
GET finish

I've also tried using IOLoop.add_timeout with the asynchronous decorator, with no improvement; a sketch of that attempt is below. From what I'd read I suspected gunicorn might somehow be inspecting the asynchronous decorator and deciding it could funnel everything down one worker, so I reverted to the blocking version shown here. Just for my sanity's sake I've pastebinned the unedited version of what I've done.
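For reference, the non-blocking attempt looked roughly like this (a sketch rather than the exact pastebinned code; it keeps the same handler but schedules the 3-second delay on the IOLoop with add_timeout and finishes the request in a callback):

from datetime import timedelta
from tornado.web import RequestHandler, asynchronous, Application
from tornado.ioloop import IOLoop
import os

class MainHandler(RequestHandler):
    @asynchronous
    def get(self):
        print "GET start"
        print "pid: "+str(os.getpid())
        # Schedule the response 3 seconds from now instead of blocking the IOLoop.
        IOLoop.instance().add_timeout(timedelta(seconds=3), self.respond)

    def respond(self):
        self.write("Hello, world.<br>pid: "+str(os.getpid()))
        print "GET finish"
        self.finish()  # required when using @asynchronous

app = Application([
    (r"/", MainHandler)
])

The requests still all landed on the same pid with this version.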

In summary, why isn't gunicorn distributing my requests across workers?

Upvotes: 3

Views: 2570

Answers (1)

CrazyCasta

Reputation: 28302

Well, apparently the browser is causing this. Looking at the traffic in Wireshark, I've determined that at least Firefox (and I assume Chrome does the same) serializes requests when the URLs are identical. Perhaps this is so that, if the response is cacheable, it can simply reuse it for the duplicate requests.
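One way to confirm it (just a sketch, using the host and port from the gunicorn log above) is to issue the requests from outside the browser, giving each one a unique query string so nothing can serialize them as duplicates of the same URL:

# Fire a few requests in parallel from Python instead of from browser tabs.
import threading
import urllib2

def fetch(i):
    # Unique query string per request; host/port match the gunicorn log above.
    body = urllib2.urlopen("http://127.0.0.1:8000/?n=%d" % i).read()
    print "response %d: %s" % (i, body)

threads = [threading.Thread(target=fetch, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

Issued concurrently like this, the requests should show up under different worker pids.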

Upvotes: 11
