Reputation: 57
I have a script that uses curl to get the HTML of a webpage. Sometimes it gets the information perfectly, while other times it seems to hang. I put in a timeout provision:
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
So now the script no longer hangs, but when it does time out, it doesn't return any of the HTML. Is there a way for curl to return all the HTML it has received before the timeout? Or is there some other way to achieve the idea: "get all the HTML you can within a specified period of time from a URL"?
Upvotes: 1
Views: 51
Reputation: 98881
Use CURLOPT_FILE. It makes curl write the response body directly to a file stream as the data arrives, so anything downloaded before the timeout is preserved in the file.
Example:
<?php
$ch = curl_init("http://www.example.com/");
$fp = fopen("/path/to/save/file", "w");

curl_setopt($ch, CURLOPT_FILE, $fp);   // stream the body to the file as it arrives
curl_setopt($ch, CURLOPT_TIMEOUT, 10); // give up after 10 seconds
curl_setopt($ch, CURLOPT_HEADER, 0);   // don't write the response headers to the file

curl_exec($ch);
curl_close($ch);
fclose($fp);

// Whatever was downloaded before the timeout is in the file.
echo file_get_contents("/path/to/save/file");
?>
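Note that when the timeout fires, curl_exec() returns false, but the bytes already written to the file are kept, so the partial HTML is still readable afterwards.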
Upvotes: 1
Reputation: 326
Using a stream wrapper, you can even parse the data on the fly. Have a look at this: Manipulate a string that is 30 million characters long
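Here is a minimal sketch of the same idea using CURLOPT_WRITEFUNCTION instead of a full stream wrapper: curl calls the callback for each chunk of the body as it arrives, so you can buffer or parse incrementally (the URL is a placeholder, and the callback here just accumulates the chunks):

<?php
$html = "";

$ch = curl_init("http://www.example.com/");
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
curl_setopt($ch, CURLOPT_WRITEFUNCTION, function ($ch, $chunk) use (&$html) {
    // Called for every chunk curl receives; process or buffer it here.
    $html .= $chunk;
    return strlen($chunk); // must return the number of bytes handled
});

curl_exec($ch);
curl_close($ch);

// $html holds everything received, even if the request timed out.
echo $html;
?>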
Upvotes: 0