Reputation: 285
I am trying to fetch the meta information from URL results returned by a search. I have been using the OpenGraph library and PHP's native get_meta_tags() function to retrieve the meta tags.
My problem arises when the URL points to a file such as an .m4v video. The program tries to read the contents of that file, but it is far too large (and, not to mention, useless for my purposes, since it is all binary junk), and the read will not stop. I am stuck until the program throws a timeout error and moves on.
Is there any way to stop reading the contents of a file if it is too large? I tried file_get_contents() with the maxlen parameter, but it still seems to read the entire page. How can I quickly determine whether a file contains markup before I dive in to mine it for meta tags?
Upvotes: 1
Views: 250
Reputation: 10169
get_headers() is what you need: the response includes Content-Type and Content-Length headers that tell you what the URL points at and how big it is before you download anything. You might want to:
$headers = get_headers($url, 1);
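To make that concrete, here is a sketch of how those two headers could gate the meta fetch. The helper name, the 1 MB cap, and the header-normalization details are my own choices, not part of the answer:

```php
<?php
// Decide from the response headers whether a URL is worth parsing for
// meta tags: it must be HTML and, if a length is reported, small enough.
function shouldFetchMeta(array $headers, int $maxBytes = 1000000): bool
{
    // Header names can differ in case between servers; normalize them.
    $h = array_change_key_case($headers, CASE_LOWER);

    // Only HTML pages carry <meta> tags worth parsing.
    $type = isset($h['content-type']) ? $h['content-type'] : '';
    if (is_array($type)) {        // redirects make get_headers() return arrays
        $type = end($type);
    }
    if (stripos($type, 'text/html') === false) {
        return false;
    }

    // Skip anything larger than the cap (e.g. a stray .m4v video).
    if (isset($h['content-length'])) {
        $len = $h['content-length'];
        if (is_array($len)) {
            $len = end($len);
        }
        if ((int) $len > $maxBytes) {
            return false;
        }
    }
    return true;
}

// Usage against a live URL (requires network access):
// $headers = get_headers($url, 1);
// if ($headers !== false && shouldFetchMeta($headers)) {
//     $meta = get_meta_tags($url);
// }
```

Note that Content-Length is optional (chunked responses may omit it), so the size check only helps when the server reports it; the Content-Type check is the reliable part.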
Upvotes: 2
Reputation: 2166
Use PHP's filesize($yourFile) to find the file size in bytes:
$size = filesize($yourFile);
if ($size < 1000) {
    $string = file_get_contents($yourFile);
}
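For a remote URL, filesize() is often unreliable (it needs allow_url_fopen and a server that reports a length), so an alternative is to read only the first few kilobytes and sniff whether the content looks like markup before parsing further. This is my own sketch, not part of the answer; the helper name and the 4 KB limit are arbitrary:

```php
<?php
// Read at most $limit bytes from a path or URL and check whether it looks
// like an HTML document, without ever downloading the whole file.
function looksLikeHtml(string $path, int $limit = 4096): bool
{
    $fh = @fopen($path, 'rb');
    if ($fh === false) {
        return false;
    }
    $head = fread($fh, $limit);   // stops after $limit bytes
    fclose($fh);

    // A crude sniff: HTML pages start with a doctype or an <html>/<head>/<meta> tag.
    return $head !== false
        && preg_match('/<(!doctype|html|head|meta)\b/i', $head) === 1;
}

// Usage:
// if (looksLikeHtml($url)) {
//     $meta = get_meta_tags($url);
// }
```

Because fread() returns after the requested number of bytes, a multi-gigabyte .m4v costs you at most one small chunk instead of a timeout.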
Upvotes: 1