Coomer

Reputation: 587

Nginx Proxy to Files on Local Disk or S3

So I'm moving my site away from Apache and onto Nginx, and I'm having trouble with this scenario:

User uploads a photo. This photo is resized, and then copied to S3. If there's suitable room on disk (or the file cannot be transferred to S3), a local version is kept.

I want requests for these images (such as http://www.mysite.com/p/1_1.jpg) to first look in the p/ directory. If no local file exists, I want to proxy the request out to S3 and render the image (but not redirect).

In Apache, I did this like so:

RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^p/([0-9]+_[0-9]+\.jpg)$ http://my_bucket.s3.amazonaws.com/$1 [P,L]

My attempt to replicate this behavior in Nginx is this:

location /p/ {
    if (-e $request_filename) {
        break;
    }
    proxy_pass http://my_bucket.s3.amazonaws.com/;
}

What happens is that every request attempts to hit Amazon S3, even if the file exists on disk (and if the file doesn't exist on Amazon, I get errors). If I remove the proxy_pass line, then requests for files on disk DO work.

Any ideas on how to fix this?

Upvotes: 15

Views: 25808

Answers (5)

RubenCaro

Reputation: 1466

You could improve your S3 proxy config like this (adapted from https://stackoverflow.com/a/44749584):

location /p/ {
    try_files $uri @s3;
}

location @s3 {
  set $s3_bucket        'your_bucket.s3.amazonaws.com';
  # try_files does not populate $1 here; use the normalized URI as the key.
  # If your bucket keys lack the /p/ prefix, strip it with a rewrite first.
  set $url_full         $uri;

  proxy_http_version     1.1;
  proxy_set_header       Host $s3_bucket;
  proxy_set_header       Authorization '';
  proxy_hide_header      x-amz-id-2;
  proxy_hide_header      x-amz-request-id;
  proxy_hide_header      x-amz-meta-server-side-encryption;
  proxy_hide_header      x-amz-server-side-encryption;
  proxy_hide_header      Set-Cookie;
  proxy_ignore_headers   Set-Cookie;
  proxy_intercept_errors on;

  resolver               8.8.4.4 8.8.8.8 valid=300s;
  resolver_timeout       10s;
  proxy_pass             http://$s3_bucket$url_full;
}
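
The answer this was adapted from used a regex location to populate the capture. A hedged sketch that restores that, and also the original Apache rule's filename restriction, might look like the following (assuming, as in the question's Apache rule, that bucket keys are the bare filenames without the /p/ prefix):

    location ~ ^/p/[0-9]+_[0-9]+\.jpg$ {
        try_files $uri @s3;
    }

    location @s3 {
        set $s3_bucket 'your_bucket.s3.amazonaws.com';
        resolver 8.8.4.4 8.8.8.8 valid=300s;
        proxy_set_header Host $s3_bucket;
        # Drop the /p/ prefix so the bucket key is the bare filename,
        # mirroring the Apache RewriteRule's $1 capture
        rewrite ^/p/(.*)$ /$1 break;
        proxy_pass http://$s3_bucket;
    }

Because `rewrite ... break` runs before `proxy_pass`, the slash-less `proxy_pass` forwards the rewritten URI, which is the pattern nginx requires inside a named location.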

Upvotes: 17

Anatoly

Reputation: 15530

Thanks for keeping my coderwall post :) For caching purposes you can improve it a bit:

http {

  proxy_cache_path          /tmp/cache levels=1:2 keys_zone=S3_CACHE:10m inactive=24h max_size=500m;
  proxy_temp_path           /tmp/cache/temp;

  server {
    location ~* ^/cache/(.*) {
      proxy_buffering        on;
      proxy_hide_header      Set-Cookie;
      proxy_ignore_headers   Set-Cookie;
      ...
      proxy_cache            S3_CACHE;
      proxy_cache_valid      24h;
      proxy_pass             http://$s3_bucket/$url_full;
    }
  }

}

One more recommendation is to extend the resolver cache up to 5 minutes:

resolver                  8.8.4.4 8.8.8.8 valid=300s;
resolver_timeout          10s;

Upvotes: 4

Dan Gayle

Reputation: 2357

Shouldn't this be an example of using try_files?

location /p/ {
    try_files $uri @s3;
}

location @s3 {
    proxy_pass http://my_bucket.s3.amazonaws.com;
}

Make sure there isn't a trailing slash on the S3 URL.
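
The slash matters because nginx does not allow proxy_pass to carry a URI part (even a bare "/") inside a named location; with no URI part, the full original request URI (e.g. /p/1_1.jpg from the question) is forwarded as the S3 key. A sketch:

    location @s3 {
        # Valid: no URI part, so the original request URI (/p/1_1.jpg)
        # is forwarded unchanged as the bucket key.
        proxy_pass http://my_bucket.s3.amazonaws.com;

        # Invalid: nginx rejects a URI part inside a named location
        # and will refuse to load this configuration:
        #proxy_pass http://my_bucket.s3.amazonaws.com/;
    }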

Upvotes: 36

Coomer

Reputation: 587

I ended up solving this by checking whether the file doesn't exist, and if so, rewriting the request. I then handle the rewritten request and do the proxy_pass there, like so:

location /p/ {
  if (!-f $request_filename) {
    # "last" stops rewrite processing and re-matches the rewritten URI,
    # so no break is needed after it
    rewrite ^/p/(.*)$ /ps3/$1 last;
  }
}

location /ps3/ {
  proxy_pass http://my_bucket.s3.amazonaws.com/;
}

Upvotes: 0

Chris Farmiloe

Reputation: 14185

break isn't doing quite what you expect: nginx will do the last thing you ask of it, which makes sense once you start digging around making modules... but basically, protect your proxy_pass with the does-not-exist check:

if (-f $request_filename) {
    break;
}
if (!-f $request_filename) {
    proxy_pass http://s3;
}

Upvotes: 0
