run_the_race

Reputation: 2418

Django + uWSGI + Nginx instantly results in upstream prematurely closed connection

Firstly, I have read probably every Stack Overflow post related to this topic over the Easter weekend. For example, upstream prematurely closed connection (uwsgi + nginx + django) does not solve my problem, because there the cause was broken code that did not even run via manage.py runserver. Most of the posts are about timeouts, but this error is thrown immediately on a file that is less than 30 KB, and all my timeouts on Nginx and uWSGI are set at the default of 60s. It is not a timeout issue; maybe it is a cache issue.

I write some data from the database into a .xlsx file using openpyxl. When I return the file it is about 27 KB, and there is roughly a 50% chance the error is thrown. If the file is quite a bit larger it fails every time; if it is quite a bit smaller it is fine every time. All other pages on the site work fine. This is the error message:

2019/04/21 20:14:52 [error] 19712#19712: *46 upstream prematurely closed connection while reading response header from upstream, client: ipaddress, server: request: "GET / HTTP/1.1", upstream: "uwsgi://unix:////var/run/uwsgi.sock:", host: "www.mysite.com"
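For context, the view is essentially the usual openpyxl-to-HttpResponse pattern. This is a simplified sketch rather than my exact code (the view name, the rows and the filename are placeholders; in the real view the rows come from the database):

from io import BytesIO

from django.http import HttpResponse
from openpyxl import Workbook

def export_xlsx(request):
    # Build the workbook in memory (placeholder rows instead of the real query).
    wb = Workbook()
    ws = wb.active
    ws.append(["id", "name", "value"])
    for i in range(1000):
        ws.append([i, "row %d" % i, i * 2])

    # Serialise the workbook to bytes and return it as a download.
    buffer = BytesIO()
    wb.save(buffer)

    response = HttpResponse(
        buffer.getvalue(),
        content_type="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    )
    response["Content-Disposition"] = 'attachment; filename="report.xlsx"'
    return response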

The file can be downloaded successfully via manage.py runserver. It also works when I start uWSGI directly. The only time it fails is when Nginx passes the request to uWSGI and then immediately errors while reading the response. Hence I have been going through the Nginx documentation trying to find something.
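(By "directly" I mean running uWSGI on its own and hitting it without Nginx in front, with something along the lines of the command below; the .ini path is a placeholder.)

uwsgi --ini uwsgi.ini --http-socket 127.0.0.1:8001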

In my Nginx configuration I set up a cache zone:

uwsgi_cache_path /var/www/my_site/public/nginx_uwsgi_temp levels=1:2 keys_zone=myzone:64m inactive=10m;

And the virtual host part, with a mess of unsuccessful attempts based on http://nginx.org/en/docs/http/ngx_http_uwsgi_module.html:

# Finally, send all non-media requests to the Django server.
location / {
    uwsgi_pass django;
    # the uwsgi_params file you installed
    include /home/somefolders/uwsgi_params;

    # http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout
    # Timeouts when talking to uWSGI
    # proxy_read_timeout 60s; # Default 60s
    # proxy_send_timeout 60s; # Default 60s
    # proxy_connect_timeout 60s; # Default 60s
    # Trying uwsgi protocol, within uwsgi .ini set "protocol = uwsgi"
    # http://nginx.org/en/docs/http/ngx_http_uwsgi_module.html
    uwsgi_read_timeout 60s; # Default value 60s
    uwsgi_send_timeout 60s; # Default value 60s
    # uwsgi_cache
    # uwsgi_buffering on;
    # uwsgi_buffer_size 8k;
    # uwsgi_max_temp_file_size 1024m;
    # uwsgi_temp_path /var/www/public/nginx_uwsgi_temp 1 2;
    # client cache
    uwsgi_cache myzone;
    # uwsgi_cache_revalidate on;
    # uwsgi_cache_key $uri;
    # uwsgi_cache_valid any 10m;
    # add_header X-Cache-Status $upstream_cache_status ;
    uwsgi_buffer_size 320k;
    uwsgi_buffers 8 320k;
    uwsgi_busy_buffers_size 320k;
    uwsgi_next_upstream off;
}

I could be barking up the wrong tree, but all my attempts point towards some Nginx cache or buffering setting.

I find it interesting that by default Nginx is set with:

uwsgi_buffers 8 4k

which works out to 8 * 4k = 32 KB, roughly the point at which things start breaking down.

Upvotes: 0

Views: 658

Answers (1)

run_the_race

Reputation: 2418

After spending days on it, as per Murphy's law, I figured it out right after taking the time to ask the question (it happens every time). In my uWSGI configuration I had:

[uwsgi]
limit-as = 128    # caps each process's address space at 128 MB

Well, that was killing the process when it grew too big. The strange thing is that I was using the same uWSGI .ini file when running uWSGI directly, yet it did not go over 128 MB then; it only did when Nginx called it.

Flip. I deleted that line and all is well. I kept delaying asking because I knew this would happen.
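For reference, the relevant bit of the .ini now simply omits the limit. If you do want a ceiling, you could instead raise it to something the export comfortably fits under (the 512 here is just an arbitrary example, not a recommendation):

[uwsgi]
limit-as = 512    # only if you want a cap at all; the value is in MB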

Upvotes: 1
