Julian

Reputation: 2845

nginx limiting the total cache size

I am using nginx to cache requests to my uwsgi backend using

uwsgi_cache_path /var/cache/nginx/uwsgi keys_zone=cache:15M max_size=5G;

My backend is setting a very long Expires header (1 year+). However, as my system runs, I see the cache topping out at 15M: it climbs to that level, then prunes back down to 10M.

This causes a lot of unnecessary calls to my backend. When I change the keys_zone size, it seems to control the size of the entire cache; nginx appears to ignore max_size and substitute the keys_zone size instead. (*)

Can anyone explain this behavior? Is there a known bug in this version? Am I missing the point? I don't want to allocate 5G to the cache manager.

# nginx -V
nginx version: nginx/1.2.0
built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) 
TLS SNI support enabled
configure arguments: --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --user=www-data --group=www-data --with-http_ssl_module --with-http_stub_status_module

(*) Update: I guess this was my overactive imagination trying to find a pattern in the chaos.

Upvotes: 5

Views: 11053

Answers (1)

Chuan Ma

Reputation: 9914

The Expires header (and some other headers) is honoured by nginx when deciding whether a response is cacheable, but it is not used to determine how long to keep the response in the cache.

By default, inactive cache entries are deleted after 10 minutes. Could you increase that value to see if it makes a difference?

proxy_cache_path path [levels=levels] keys_zone=name:size [inactive=time] [max_size=size] [loader_files=number] [loader_sleep=time] [loader_threshold=time];

Cached data that are not accessed during the time specified by the inactive parameter get removed from the cache regardless of their freshness. By default, inactive is set to 10 minutes.
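In your case that would mean adding an inactive parameter to your existing uwsgi_cache_path line. As a sketch (the 1y value is illustrative; nginx accepts time units like m, h, d, and y, so pick whatever matches your Expires horizon):

uwsgi_cache_path /var/cache/nginx/uwsgi keys_zone=cache:15M max_size=5G inactive=1y;

With inactive set longer than the gaps between requests for a given item, entries should only be evicted once the cache actually reaches max_size, rather than being pruned on the 10-minute default.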

Reference: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_path

Upvotes: 5
