Bijoy

Reputation: 1131

Using builtin __import__() in normal cases

Here is how I checked the performance of __import__():

In [9]: %%timeit
   ...: math = __import__('math')
   ...: sqrt = math.sqrt
   ...: sqrt(7894561230)
   ...: 
The slowest run took 11.16 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 534 ns per loop

In [10]: %%timeit
    ...: from math import sqrt
    ...: sqrt(7894561230)
    ...: 
    ...: 
The slowest run took 10.23 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 979 ns per loop
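
(Note that math is cached in sys.modules after the first import, so both loops mostly measure the cost of looking the module up again rather than a fresh import. A quick check, assuming the module has already been imported once:)

    import sys

    # After the first import, repeated imports of either form only
    # pay a cache lookup, not a full module load.
    print('math' in sys.modules)   # True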

The builtin __import__ function seems faster than the traditional import statement.

So can it be used in code the way I have used it, or is there any major harm in doing this? The __import__ documentation doesn't state any harm in doing so.

But it does state:

Direct use of __import__() is rare, except in cases where you want to import a module whose name is only known at runtime.
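
For reference, here is a minimal sketch of that runtime-name case (the "json" name is just a stand-in for a value that would only be known at runtime, e.g. read from a config file or user input):

    # The module name is only known at runtime in this scenario
    module_name = "json"          # hypothetical runtime value
    mod = __import__(module_name)
    print(mod.dumps({"ok": True}))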

So my question is: can it be used in normal cases too, or is there any disadvantage to it?

Upvotes: 1

Views: 159

Answers (1)

Right leg

Reputation: 16740

Here is a small "benchmark". Let's define two functions:

def f1():
    import sys

def f2():
    sys = __import__('sys')
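
(The snippets below assume that dis and timeit have already been imported:)

    import dis
    import timeit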

Bytecode comparison:

>>> dis.dis(f1)
  5           0 LOAD_CONST               1 (0)
              2 LOAD_CONST               0 (None)
              4 IMPORT_NAME              0 (sys)
              6 STORE_FAST               0 (sys)
              8 LOAD_CONST               0 (None)
             10 RETURN_VALUE

>>> dis.dis(f2)
  8           0 LOAD_GLOBAL              0 (__import__)
              2 LOAD_CONST               1 ('sys')
              4 CALL_FUNCTION            1
              6 STORE_FAST               0 (sys)
              8 LOAD_CONST               0 (None)
             10 RETURN_VALUE

The generated bytecodes have the same number of instructions, but they are different. So what about the timing?

>>> timeit.timeit(f1)
0.4096750088112782

>>> timeit.timeit(f2)
0.474958091968411

It turns out that the __import__ way is slower. In addition, it is far less readable than the classical import statement.

Conclusion: stick with import.


Now for a bit of interpretation...

I suppose that calling __import__ is slower than executing an import statement, because the bytecode generated by the latter is optimised.

Take a look at the instructions: the bytecode for __import__ looks just like any other function call, with a CALL_FUNCTION instruction. On the other hand, the import statement results in an IMPORT_NAME instruction, which definitely looks like something dedicated to imports and is probably executed in an optimised way by the interpreter.

As a matter of fact, the first two instructions merely load the operands, so the real difference between the two bytecodes is the third instruction. In other words, the difference between the two functions comes down to the difference between IMPORT_NAME and CALL_FUNCTION.
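
For what it's worth, IMPORT_NAME does not bypass __import__ entirely: the import statement still looks up builtins.__import__ to do the actual loading, which you can observe by temporarily wrapping it (traced_import below is just an illustration):

    import builtins

    _original_import = builtins.__import__

    def traced_import(name, *args, **kwargs):
        # Log which module the import machinery is asked for,
        # then delegate to the real __import__.
        print("__import__ called for", name)
        return _original_import(name, *args, **kwargs)

    builtins.__import__ = traced_import
    try:
        import json   # prints: __import__ called for json
    finally:
        builtins.__import__ = _original_import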

Upvotes: 3
