I have a directory outside the webroot containing images, CSS, and JavaScript files.
These files change often.
I could write a script that locates the file, determines its MIME type, and outputs it to the browser.
Or
I could locate the file, copy it to a web-accessible directory, and redirect to it with a Location header. On subsequent requests, the script would first check whether the file has been modified, then redirect again.
Which would perform better: a readfile() on every request, or a timestamp check plus a redirect on every request (two HTTP requests instead of one)?
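The first option can be sketched like this (a minimal sketch; the `$baseDir` path, the `file` query parameter, and the `resolve_asset()` helper are all hypothetical): resolve the requested name against the private directory, send a Last-Modified header, and honour If-Modified-Since so repeat requests can be answered with an empty 304 instead of the full file.

```php
<?php
// Hypothetical helper: resolve a requested name against a directory outside
// the webroot and decide what to send, so the caller can emit headers.
function resolve_asset(string $baseDir, string $name): array {
    $name = basename($name);               // strip any path components
    $path = $baseDir . '/' . $name;
    if ($name === '' || !is_file($path)) {
        return ['status' => 404];
    }
    $type = mime_content_type($path) ?: 'application/octet-stream';
    return ['status' => 200, 'path' => $path, 'type' => $type,
            'mtime' => filemtime($path)];
}

// Front controller: emit headers, honour If-Modified-Since, stream the file.
// (Guarded so that including this file from the CLI does nothing.)
if (PHP_SAPI !== 'cli') {
    $asset = resolve_asset('/var/site/assets', $_GET['file'] ?? '');
    if ($asset['status'] !== 200) {
        http_response_code(404);
        exit;
    }
    header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $asset['mtime']) . ' GMT');
    $since = $_SERVER['HTTP_IF_MODIFIED_SINCE'] ?? '';
    if ($since !== '' && strtotime($since) >= $asset['mtime']) {
        http_response_code(304);           // browser reuses its cached copy
        exit;
    }
    header('Content-Type: ' . $asset['type']);
    header('Content-Length: ' . (string)filesize($asset['path']));
    readfile($asset['path']);              // stream the file to the browser
}
```

Honouring If-Modified-Since this way largely offsets the usual caching disadvantage of serving files through a script.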
Upvotes: 2
Views: 2511
Reputation: 321
Redirecting to a static file seems faster and caches better, at the expense of crawlability. Echoing the entire file avoids the redirect penalty, but browsers are not good at caching dynamic responses.
Upvotes: 0
Reputation: 793
How about a symbolic link directly to the file, rather than to the entire directory? You could even give it a 'static' filename and let the web server do the modification-timestamp check and caching, which would likely be much, much faster.
Benchmark it, though, of course :)
Upvotes: 1
Reputation: 19353
Another suggestion: if you have control of the filesystem, you could create a symbolic link in the web-accessible directory pointing to the image file, either by using exec() to invoke the ln command or with PHP's symlink() function.
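The symlink() route might look like this (a sketch with hypothetical paths; note that on Apache the serving directory also needs Options FollowSymLinks for the link to be followed):

```php
<?php
// Expose one file from outside the webroot by linking to it from a
// web-accessible directory. Both paths below are hypothetical.
$source = '/var/site/assets/logo.png';     // real file, outside the webroot
$public = __DIR__ . '/static/logo.png';    // link inside the webroot

if (is_file($source) && !file_exists($public)) {
    symlink($source, $public);             // PHP built-in, no exec('ln') needed
}
// From here on, emit URLs (or redirect) to /static/logo.png and let the
// web server handle timestamp checks and caching.
```

Since the link always points at the live file, the web server sees every modification immediately; there is no copy to keep in sync.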
Upvotes: 1
Reputation: 551
First rule of performance: benchmark, don't speculate.
I'll promptly break that rule and speculate that readfile() will be faster, because it eliminates a network round trip.
How much performance do you need? The fastest approach would be to set up a separate static-content web server under a subdomain (e.g. http://static.mysite.com/foo.jpg ) on a completely different machine, and let that server handle the often-changing image/CSS/JavaScript files.
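A minimal sketch of such a static host, assuming nginx (the subdomain, document root, and cache lifetime are placeholders):

```nginx
server {
    listen 80;
    server_name static.mysite.com;   # hypothetical subdomain
    root /var/site/assets;           # the often-changing files live here

    location / {
        # nginx answers If-Modified-Since with 304 itself, so revalidation
        # of changed files is cheap while unchanged ones stay cached.
        expires 1h;
        add_header Cache-Control "public, must-revalidate";
    }
}
```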
Upvotes: 2