Arshdeep

Reputation: 4323

How to get a webpage's contents without cURL?

I need to get a webpage's content, and I can't use cURL as it is not enabled. I tried the code below, but it is not working.

$opts = array(
  'http'=>array(
    'method'=>"GET",
    'header'=>"Accept-language: en\r\n" .
              "Cookie: foo=bar\r\n"
  )
);

$context = stream_context_create($opts);   

$fp = fopen($_GET['url'], 'r', false, $context);
if ($fp) {
    fpassthru($fp);
    fclose($fp); // only close the handle if fopen succeeded
}
exit;

The code produces this error:

Warning: fopen(http://www.google.com/search?&q=site:www.myspace.com+-intitle:MySpaceTV+%22Todd Terje%22) [function.fopen]: failed to open stream: HTTP request failed! HTTP/1.0 400 Bad Request 

Upvotes: 4

Views: 13160

Answers (6)

Sarfraz

Reputation: 382616

You can use the file_get_contents function for that:

$content = file_get_contents('url/filepath here');
echo $content;

Note: if you want to read from a secure protocol, e.g. HTTPS, make sure you have the openssl extension enabled in php.ini.

Update:

From what you say, I suspect you have the allow_url_fopen setting turned off in your php.ini file; you need to turn it on to be able to read from URLs.
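
A quick way to check that setting at runtime (a minimal sketch using the standard ini_get function):

// URL wrappers must be enabled for file_get_contents to fetch URLs
if (!ini_get('allow_url_fopen')) {
    die('allow_url_fopen is disabled in php.ini');
}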

Update 2:

It looks like you are not specifying the correct URL. I just checked: for example, if you simply put in www.google.com, it works fine:

$url = 'http://www.google.com';
$content = file_get_contents($url);
echo $content;

Upvotes: 3

mean.cj

Reputation: 123

See PHP's file_get_contents() function:

nadeausoftware.com/articles/2007/07/php_tip_how_get_web_page_using_fopen_wrappers

If cURL is available, this helper does the same job:

/**
 * Get a web file (HTML, XHTML, XML, image, etc.) from a URL. Return an
 * array containing the HTTP server response header fields and content.
 */
function get_web_page( $url )
{
    $options = array(
        CURLOPT_RETURNTRANSFER => true,     // return web page
        CURLOPT_HEADER         => false,    // don't return headers
        CURLOPT_FOLLOWLOCATION => true,     // follow redirects
        CURLOPT_ENCODING       => "",       // handle all encodings
        CURLOPT_USERAGENT      => "spider", // who am i
        CURLOPT_AUTOREFERER    => true,     // set referer on redirect
        CURLOPT_CONNECTTIMEOUT => 120,      // timeout on connect
        CURLOPT_TIMEOUT        => 120,      // timeout on response
        CURLOPT_MAXREDIRS      => 10,       // stop after 10 redirects
    );

    $ch      = curl_init( $url );
    curl_setopt_array( $ch, $options );
    $content = curl_exec( $ch );
    $err     = curl_errno( $ch );
    $errmsg  = curl_error( $ch );
    $header  = curl_getinfo( $ch );
    curl_close( $ch );

    $header['errno']   = $err;
    $header['errmsg']  = $errmsg;
    $header['content'] = $content;
    return $header;
}
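
A usage sketch for the helper above (the URL is just an example):

$result = get_web_page( 'http://www.google.com/' );
if ( $result['errno'] != 0 ) {
    echo 'cURL error: ' . $result['errmsg'];
} else {
    echo $result['content'];
}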

Thanks: http://nadeausoftware.com/articles/2007/06/php_tip_how_get_web_page_using_curl

Upvotes: -3

mr.b

Reputation: 4962

You can use old-fashioned code, like:

$CRLF = "\r\n";
$hostname = "www.something.com";

$headers[] = "GET ".$_GET['url']." HTTP/1.1";
$headers[] = "Host: ".$hostname;
$headers[] = "Accept-language: en";
$headers[] = "Cookie: foo=bar";
$headers[] = "";

$remote = fsockopen($hostname, 80, $errno, $errstr, 5);
// a pinch of error handling here

fwrite($remote, implode($CRLF, $headers).$CRLF);

$response = '';

while ( ! feof($remote))
{
    // Get 1K from buffer
    $response .= fread($remote, 1024);
}

fclose($remote);
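
Note that $response contains the raw status line and response headers followed by the body. A minimal sketch to split them (this ignores chunked transfer encoding, which an HTTP/1.1 server may use):

// Headers and body are separated by the first blank line (\r\n\r\n)
list($rawHeaders, $body) = explode("\r\n\r\n", $response, 2);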

Update: The good thing about this solution is that it doesn't rely on fopen wrappers.

Upvotes: 5

CharlesLeaf

Reputation: 3201

Have you noticed that there is an ACTUAL space in your URL between Todd and Terje? That might cause your problem, as browsers usually encode it to + or %20.
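
For example, percent-encoding the query before building the URL avoids the 400 (a minimal sketch; $query stands in for your search terms):

$query = 'site:www.myspace.com -intitle:MySpaceTV "Todd Terje"';
// rawurlencode turns spaces into %20 and double quotes into %22
$url = 'http://www.google.com/search?q=' . rawurlencode($query);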

Upvotes: 4

Andrey

Reputation: 60055

Use a sniffer like Wireshark to capture the contents of the actual browser request. Then copy it and remove headers one by one; soon you will arrive at the minimal set of headers needed.
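
Once you know which headers matter, you can reproduce them in the stream context from the question (a sketch; the User-Agent string is only an example):

$opts = array(
  'http' => array(
    'method' => 'GET',
    'header' => "User-Agent: Mozilla/5.0 (compatible; ExampleBot)\r\n" .
                "Accept-language: en\r\n"
  )
);
$context = stream_context_create($opts);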

Upvotes: 0

dreeves

Reputation: 26932

You can actually specify a URL instead of a filename in file_get_contents.

Upvotes: 1
