perrygeo

Reputation: 385

Large Nginx/uwsgi-served content hangs for keepalive_timeout seconds

I'm serving a dynamically generated PDF through Django 1.5.1 using a view like:

from django.http import HttpResponse

def pdf_view(request):  # illustrative view name; generate_pdf() is defined elsewhere in the app
    pdf = generate_pdf()
    response = HttpResponse(pdf, mimetype="application/pdf")
    response['Content-Disposition'] = 'attachment; filename=1234_2013_10_30.pdf'
    return response

This works 100% of the time on the development server. However, when served through uwsgi 1.9.18.2 and nginx 1.1.19 I get the following behavior:

$ curl -v -o test.out "http://localhost/demo/awc.pdf?submissionType=addition&permit=1234"
...
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET /demo/awc.pdf?submissionType=addition&permit=1234 HTTP/1.1
> User-Agent: curl/7.21.2 (Windows) libcurl/7.21.2 OpenSSL/1.0.0a zlib/1.2.3
> Host: localhost
> Accept: */*
>
  0     0    0     0    0     0      0      0 --:--:--  0:00:10 --:--:--     0

< HTTP/1.1 200 OK
< Server: nginx/1.1.19
< Date: Wed, 30 Oct 2013 22:14:23 GMT
< Content-Type: application/pdf
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Language, Cookie
< Content-Language: en-us
< Content-Disposition: attachment; filename=1234_2013_10_30.pdf
<
{ [data not shown]
100 2313k    0 2313k    0     0  270k       0 --:--:--  0:00:12 --:--:--     0
.....
100 2313k    0 2313k    0     0  77452      0 --:--:--  0:00:30 --:--:--     0* transfer closed with outstanding read data remaining
100 2313k    0 2313k    0     0  75753      0 --:--:--  0:00:31 --:--:--     0* Closing connection #0

curl: (18) transfer closed with outstanding read data remaining

In summary, the client gets a response in 10 seconds, downloads all the data in ~2 seconds, and then hangs for an additional 18 seconds.

Not coincidentally, my nginx configuration specifies keepalive_timeout 20s;. After waiting out those keepalive_timeout seconds, the downloaded content is perfectly OK. I can "solve" the problem by setting keepalive_timeout to zero, but that's not really a viable solution.
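For reference, that's just the stock directive, which in a typical nginx config sits at the http or server level, e.g.:

http {
    keepalive_timeout 20s;   # setting this to 0 makes the hang go away, but that kills keepalives entirely
}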

When the content is small (less than ~1MB) the problem inexplicably goes away.

> GET "http://localhost/demo/awc.pdf?submissionType=addition&permit=5678" HTTP/1.1
> User-Agent: curl/7.21.2 (Windows) libcurl/7.21.2 OpenSSL/1.0.0a zlib/1.2.3
> Host: localhost
> Accept: */*
>
  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0< HTTP/1.1 200 OK
< Server: nginx/1.1.19
< Date: Wed, 30 Oct 2013 22:39:12 GMT
< Content-Type: application/pdf
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Language, Cookie
< Content-Language: en-us
< Content-Disposition: attachment; filename=1234_2013_10_30.pdf
<
{ [data not shown]
100  906k    0  906k    0     0   190k      0 --:--:--  0:00:04 --:--:--  246k* Connection #0 to host localhost left intact 

I am guessing it has something to do with the chunked encoding or the lack of a Content-Length header, but I can't seem to find the magic incantation. Any ideas?
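One obvious experiment (which I haven't actually verified) would be to set Content-Length explicitly in the view, so nginx wouldn't need to fall back to chunked encoding, e.g.:

pdf = generate_pdf()
response = HttpResponse(pdf, mimetype="application/pdf")
response['Content-Disposition'] = 'attachment; filename=1234_2013_10_30.pdf'
response['Content-Length'] = len(pdf)  # explicit length; assumes pdf is a byte string
return response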

Upvotes: 2

Views: 2085

Answers (2)

amurrell

Reputation: 2455

I was having the same problem, and for me too it was only happening on dynamically generated content. I am on nginx 1.5.8 and Ubuntu 12.04 LTS, running a fastcgi_pass to php-fpm, and my keepalive_timeout was causing the hang as well!

I think keepalive_timeout is only (supposed to be?) applied to static content, but I am serving my dynamic JavaScript file (js.php) as though it were static, so the keepalive timeout is getting applied?

If I find a permanent solution, as opposed to just this explanation, I'll post it. Ah yes, perhaps this is the answer, where it says to specify this in the nginx.conf file:

fastcgi_keep_conn on;

I still left my keepalive_timeout at only 1 second, but the solution provided in that accepted answer makes sense: my dynamic file was getting passed to FastCGI, but the keepalive connection was not being retained by FastCGI. Here's also an official nginx doc on that:

By default, a FastCGI server will close a connection right after sending the response. However, when this directive is set to the value on, nginx will instruct a FastCGI server to keep connections open. This is necessary, in particular, for keepalive connections to FastCGI servers to function.
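Pieced together, a minimal sketch of how that fits into a config (the upstream name and socket path are placeholders, not my actual setup):

upstream php_backend {
    server unix:/var/run/php5-fpm.sock;   # placeholder socket path
    keepalive 8;                          # keep idle connections to php-fpm open
}

server {
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_keep_conn on;             # nginx keeps the upstream connection open
        fastcgi_pass php_backend;
    }
}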

Upvotes: 0

perrygeo

Reputation: 385

I'm still unsure why the original problem occurs, but I found a decent workaround:

Disabling chunked transfer encoding in the nginx config seems to avoid the problem.

location / {
    uwsgi_pass unix:///var/run/uwsgi/app/socket;
    include uwsgi_params;
    # keepalive_timeout 0;
    chunked_transfer_encoding off;
}
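To sanity-check the headers after the change, dumping just the response headers with curl is enough, e.g.:

curl -sD - -o /dev/null "http://localhost/demo/awc.pdf?submissionType=addition&permit=1234"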

Upvotes: 1
