Reputation:
By old habit I've always used require over include. Basically the two constructs are exactly the same, except that require throws a fatal error if the file does not exist and thus stops the script, whereas include just throws a warning and continues on with the script as if nothing happened.
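A minimal sketch of the difference (assuming missing.php does not exist):

<?php
// include: emits an E_WARNING, execution continues
include 'missing.php';
echo "still running\n";

// require: emits a fatal error, execution stops here
require 'missing.php';
echo "never reached\n";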
Normally, when I dynamically include files, I use something along the lines of "if file_exists, then require it", like this:
<?php
if (file_exists($file)) {
    require $file;
} else {
    /* error handling */
}
?>
As far as I know, and please correct me if I'm wrong, this is widely accepted as best practice and the most efficient way to handle it.
But I thought of another approach, which seems to be slightly faster and smarter in my opinion:
<?php if (!include $file) { /* error handling */ } ?>
I have not benchmarked it yet, but it seems logical to me that it should be faster than the file_exists/require combo, since that needs two disk interactions where the include approach only needs one.
From my tests it works as expected: it inherits the scope you would expect, and variables set in the included file are accessible.
Is there any reason not to do this?
Edit: typo
Edit 2: one argument against this could be the E_WARNING thrown when include tries to load a file that does not exist. That can be avoided by prefixing include with the @ error-suppression operator, like this:
<?php if (!@include $file) { /* error */ } ?>
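If the suppressed warning still matters for diagnostics, one possible sketch (assuming $file holds the path to include): error_get_last() can recover the message even when @ hid its output:

<?php
if (!@include $file) {
    // @ suppresses the warning's output, but error_get_last()
    // still records it, so it can be logged in one place
    $err = error_get_last();
    error_log('include failed: ' . ($err['message'] ?? $file));
}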
Upvotes: 2
Views: 94
Reputation: 98469
No, this is not a "best practice".
You'll be including a file in one of three cases:

1. The file is required: the program cannot do its job without it.
2. The file is optional: the program works without it, but uses it if present.
3. The file is conditional: whether it is needed, or even exists, is only known at runtime.

The only time you should use the if-include pattern you show here is in the second or third case, where having the file is nice but not necessary. In the first case, you should absolutely not do this - you should be using require. In the second case, you should strongly consider using include without the if statement. In the third case, you might use a conditional include, but see below.
The general "best practice" for managing includes in a PHP project where you can expect an include
statement to ever fail without tanking the program is to define __autoload
and have that handle your error correction, file-existence-checking, and so on.
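A minimal sketch of that pattern; __autoload itself is deprecated in current PHP, so this uses its modern equivalent spl_autoload_register, and the class-to-path mapping is a hypothetical layout:

<?php
// Register an autoloader that centralizes file resolution and
// existence checking, instead of scattering include guards around.
spl_autoload_register(function ($class) {
    $file = __DIR__ . '/classes/' . $class . '.php';  // hypothetical layout
    if (file_exists($file)) {
        require $file;
    } else {
        error_log("autoload: no file found for class $class");
        // PHP will then raise its usual "class not found" error
    }
});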
To address your supposition that "it would be faster" to attempt the include and then detect failure: micro-optimization, especially when not backed by empirical data, is the root of all evil. It doesn't matter whether it might be faster. First, determine whether you have a problem at all. If yes, then determine whether your include statements take up enough of the runtime to be worth the programmer-hours you'd spend making them marginally better. If yes, then test whether your alternate implementation works properly. If yes, then benchmark both versions and see if the alternate is faster. Only then should you consider deploying it.
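If it ever gets that far, a rough benchmark sketch of the missing-file path, where the two approaches actually differ (illustrative only; results depend on PHP's stat cache, opcache, and the filesystem; the path is hypothetical):

<?php
$missing = __DIR__ . '/no-such-file.php';  // assumed not to exist
$n = 100000;

$t = microtime(true);
for ($i = 0; $i < $n; $i++) {
    if (file_exists($missing)) { require $missing; } else { /* error handling */ }
}
printf("file_exists + require: %.4fs\n", microtime(true) - $t);

$t = microtime(true);
for ($i = 0; $i < $n; $i++) {
    if (!@include $missing) { /* error handling */ }
}
printf("@include + check:      %.4fs\n", microtime(true) - $t);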
Upvotes: 2