Reputation: 3126
I am providing a facility for my web users to upload their profile image from a URL rather than from their computer. I can see that a mischievous user could provide the URL of a huge file, or a URL that points to something like /dev/random; very unlikely, but it can happen. Is there a way I can determine the size of the file before fetching it completely to my server?
Upvotes: 2
Views: 577
Reputation: 13557
Depending on what you are doing to grab that remote file, there are different things you can do.
file_get_contents('http://foobar.com') is quite convenient, but it gives you the least amount of control; I don't see how you could do a HEAD request with it to grab the Content-Length header up front. fsockopen() will make you cry when dealing with HTTPS. curl is, well, curl: just as ugly as it is powerful. There are other options as well, like the HTTP PECL extension (which basically wraps curl).
With curl I would:
1. Issue a HEAD request and check the Content-Length header. If the server doesn't answer HEAD, issue a GET and abort it once the headers have arrived.
2. Do the actual download with CURLOPT_WRITEFUNCTION so you can abort it if the volume exceeds your limit. You should check this even if (1) yielded a result, as that result might've been spoofed.
In the very worst case you'll have made 1 HEAD and 1 GET request to acquire the Content-Length, as well as another GET request to download $yourLimit bytes.
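A rough sketch of that two-step approach with curl (just a sketch; the fetch_image_capped() name and the 2 MB default cap are made up for illustration):

<?php
// Sketch only: the function name and the 2 MB default cap are illustrative.
function fetch_image_capped(string $url, int $limit = 2 * 1024 * 1024): ?string
{
    // (1) HEAD request: ask for the headers only and read the advertised size.
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_NOBODY         => true,   // turns the request into a HEAD
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,
    ]);
    curl_exec($ch);
    $advertised = curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD); // -1 if unknown
    curl_close($ch);

    if ($advertised > $limit) {
        return null; // advertised size is already over the cap
    }

    // (2) GET request, aborted as soon as more than $limit bytes have arrived,
    //     because the Content-Length may have been missing or spoofed.
    $data = '';
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_WRITEFUNCTION  => function ($ch, $chunk) use (&$data, $limit) {
            $data .= $chunk;
            // Returning a value that differs from strlen($chunk) aborts the transfer.
            return strlen($data) > $limit ? -1 : strlen($chunk);
        },
    ]);
    $ok = curl_exec($ch); // false if the callback aborted or the request failed
    curl_close($ch);

    return $ok ? $data : null;
}

Used as $image = fetch_image_capped($url); a null return means the file was too big or the request failed, so even a spoofed or missing Content-Length can't push much more than $limit bytes onto your server.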
Upvotes: 1
Reputation: 174728
Check for the Content-Length header in the response from the server.
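For example, a quick first check with get_headers() could look like this (a sketch; the URL and the 2 MB limit are placeholders, and since the header can be missing or spoofed, treat this only as a first filter):

<?php
// Sketch: read the advertised size with a HEAD request before downloading.
// The example URL and the 2 MB limit are illustrative assumptions.
$url   = 'http://example.com/avatar.jpg';
$limit = 2 * 1024 * 1024;

// get_headers() sends GET by default; switch the default context to HEAD.
stream_context_set_default(['http' => ['method' => 'HEAD']]);

$headers = get_headers($url, true);
$headers = $headers ? array_change_key_case($headers, CASE_LOWER) : [];

$size = $headers['content-length'] ?? -1;
if (is_array($size)) {   // redirects can yield several values; take the last one
    $size = end($size);
}

if ($size < 0) {
    echo "No Content-Length reported; enforce a limit while downloading instead.\n";
} elseif ((int) $size > $limit) {
    echo "Refusing: server advertises $size bytes.\n";
} else {
    echo "Advertised size of $size bytes is within the limit.\n";
}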
Upvotes: 0