Leonard

Reputation: 561

Nginx proxy_buffer_size is not enough for cache key

I need to cache POST responses in my work, so I have the following location:

location /api {
    client_max_body_size 1m;
    client_body_buffer_size 1m;
    proxy_buffers 16 128k;
    proxy_busy_buffers_size 256k;
    proxy_pass http://127.0.0.1:8080;
    proxy_cache api;
    proxy_cache_methods POST;
    proxy_cache_key "$request_uri|$request_body";
    add_header X-Cache-Status $upstream_cache_status;
}

My nginx version is 1.8.0

When I send a POST request with a large body, I get this message in the nginx error.log:

proxy_buffer_size 4096 is not enough for cache key, it should be increased to at least 129024

The nginx documentation has the following description:

proxy_buffer_size Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header. By default, the buffer size is equal to one memory page. This is either 4K or 8K, depending on a platform. It can be made smaller, however.

What is the relation between proxy_buffer_size and the cache key? How can I cache the response in this case?

Upvotes: 3

Views: 13520

Answers (2)

Leonard

Reputation: 561

If you want to use $request_body in the cache key, you should set proxy_buffer_size to a value greater than the sum of client_body_buffer_size and the response header size.

See: http://nginx.2469901.n2.nabble.com/upstream-sent-too-big-header-while-reading-response-header-from-upstream-td7588354.html
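
For example, a minimal sketch based on the values from the question (the 2m sizes below are assumed, chosen only to leave headroom above client_body_buffer_size plus the response headers):

    location /api {
        client_max_body_size 1m;
        client_body_buffer_size 1m;
        # the cache key embeds $request_body, so proxy_buffer_size must hold the
        # buffered body plus the upstream response header; 2m is an assumed value
        proxy_buffer_size 2m;
        proxy_buffers 8 512k;
        # must be >= proxy_buffer_size and smaller than all proxy_buffers minus one buffer
        proxy_busy_buffers_size 2m;
        proxy_pass http://127.0.0.1:8080;
        proxy_cache api;
        proxy_cache_methods POST;
        proxy_cache_key "$request_uri|$request_body";
        add_header X-Cache-Status $upstream_cache_status;
    }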

Upvotes: 0

peixotorms

Reputation: 1283

First, I would recommend using GET instead of POST, so you can use proxy_cache_key "$request_method$host$request_uri"; and not have to adjust your buffers at all for this to work.
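
For reference, a GET-based version needs no buffer tuning; a rough sketch reusing the api zone from the question (GET responses are cached by default, so proxy_cache_methods can be omitted):

    location /api {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache api;
        proxy_cache_key "$request_method$host$request_uri";
        add_header X-Cache-Status $upstream_cache_status;
    }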

Also, while increasing the buffers might work, it adds a lot of overhead because you're including the whole request body in your cache key. As with Redis, Memcached and other key-value stores, a cache key should be unique and short (and you are making it far too big).

If you must use POST, you need to tune the buffers to a few megabytes, like this:

location /api {
    client_max_body_size 12m;
    proxy_buffers 8 2m;
    proxy_buffer_size 12m;
    proxy_busy_buffers_size 12m;
    client_body_buffer_size 12m;
    proxy_pass http://127.0.0.1:8080;
    proxy_cache api;
    proxy_cache_methods POST;
    proxy_cache_key "$request_uri|$request_body";
    add_header X-Cache-Status $upstream_cache_status;
}
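
Either way, the api zone referenced by proxy_cache has to be declared in the http context with proxy_cache_path; something along these lines (the path, zone size and limits are assumed values, not taken from the question):

    # http context (assumed path and sizes)
    proxy_cache_path /var/cache/nginx/api levels=1:2 keys_zone=api:10m max_size=1g inactive=60m;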

Having a big buffer won't use up all your memory, but according to http://mailman.nginx.org/pipermail/nginx/2013-September/040447.html "it can be used as a DoS vector if an attacker is allowed to open many connections but you can't afford them all to allocate client_body_buffer_size buffer".

Also, according to this answer: Nginx proxy_cache_key $request_body is ignored for large request body, when $content_length > client_body_buffer_size the request body is written to a file and the variable $request_body == ""... So this is another thing to consider: your $request_body might be empty and you will get collisions in your cache.
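
If you do keep $request_body in the key, one possible guard (a sketch, not part of the original answer) is to skip caching whenever the body was spooled to disk and $request_body comes back empty:

    # http context: flag requests whose body did not fit in client_body_buffer_size
    map $request_body $skip_body_cache {
        default 0;
        ""      1;
    }

    # inside the location block:
    proxy_cache_bypass $skip_body_cache;
    proxy_no_cache     $skip_body_cache;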

Having said that, I would strongly recommend that you change from POST to GET and add a cookie, or find some other way to identify your requests / users. A large volume of big POST requests won't scale very well.
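
If a cookie identifies the user, it can simply be appended to the key of the GET-based example above (the userid cookie name here is hypothetical):

    # $cookie_userid refers to a hypothetical "userid" cookie
    proxy_cache_key "$request_method$host$request_uri$cookie_userid";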

Upvotes: 4
