Reputation: 2716
I heard threading is not very efficient in Python (compared to other languages).
Is this true? If so, how can a Python programmer overcome this?
Upvotes: 12
Views: 8092
Reputation: 1
After finding the solution below, I wonder why Python still has a threading module at all.
from multiprocessing import Pool

def some_function(x):
    return x * x

if __name__ == '__main__':             # guard required for multiprocessing on Windows
    Xs = list(range(1, 1000))          # make 1, 2, ..., 999
    pool = Pool(processes=16)          # start 16 worker processes (ideally one per CPU core)
    print(pool.map(some_function, Xs)) # print the squared values
Upvotes: -1
Reputation: 8036
You overcome this by using multiprocessing instead! It's as simple to use as threading in Python, but it gives you the full power of all your CPU cores.
multiprocessing is a package that supports spawning processes using an API similar to the threading module. The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads. Due to this, the multiprocessing module allows the programmer to fully leverage multiple processors on a given machine. It runs on both Unix and Windows.
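A minimal sketch of that "similar API" point (the `work` function and queue names are my own, not from the answer): swapping `Thread` for `Process` is often the only structural change, though results must then travel through a picklable channel such as `multiprocessing.Queue` rather than shared objects.

```python
from multiprocessing import Process, Queue
from threading import Thread
import queue

def work(out, label):
    # The call is identical whether `work` runs in a thread or another process
    out.put(label.upper())

if __name__ == "__main__":
    q_thread = queue.Queue()  # ordinary in-process queue for the thread
    q_proc = Queue()          # multiprocessing queue, crosses process boundaries
    t = Thread(target=work, args=(q_thread, "thread"))
    p = Process(target=work, args=(q_proc, "process"))
    t.start(); p.start()
    t.join(); p.join()
    print(q_thread.get(), q_proc.get())  # prints: THREAD PROCESS
```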
Upvotes: 7
Reputation: 172229
Threading itself is efficient in CPython, but threads cannot run simultaneously on different processors/cores; this is probably what was meant. It only affects you if you need shared-memory concurrency across cores.
Other Python implementations do not have this problem.
Upvotes: 1
Reputation: 41486
CPython uses reference counting with a cyclic garbage collector for memory management. To make this practical, it has a mechanism called the "global interpreter lock" which protects the reference counting system, along with all the other interpreter internals.
On a single-core machine, this doesn't matter - all threading is faked via time-slicing, anyway. On a multiple core machine, it makes a difference: a CPU bound Python program running on CPython won't make use of all of the available cores.
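To see that claim for yourself, here is an illustrative sketch (the `busy` function is a made-up CPU-bound workload, not from the answer): run it under both executors and time each one; the thread pool gains nothing from extra cores because of the GIL, while the process pool does.

```python
import concurrent.futures as cf

def busy(n):
    # Pure-Python arithmetic: the executing thread holds the GIL throughout
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    jobs = [200_000] * 4
    with cf.ThreadPoolExecutor(max_workers=4) as ex:
        threaded = list(ex.map(busy, jobs))   # effectively serialized by the GIL
    with cf.ProcessPoolExecutor(max_workers=4) as ex:
        parallel = list(ex.map(busy, jobs))   # spread across real cores
    assert threaded == parallel  # same answers; only the wall time differs
```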
There are a number of possible responses to this. Notably, if threads are being used to convert blocking I/O into non-blocking operations, that already works just fine in standard CPython without any special modifications, because I/O operations release the GIL while they wait.
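A small sketch of that I/O case (names are my own; `time.sleep` stands in for a blocking network or disk call, and like real blocking I/O it releases the GIL while waiting, so the four "requests" overlap instead of running back to back):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(i):
    time.sleep(0.1)  # stand-in for blocking I/O; the GIL is released here
    return i * 2

if __name__ == "__main__":
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=4) as ex:
        results = list(ex.map(fake_request, range(4)))
    elapsed = time.perf_counter() - start
    print(results)  # [0, 2, 4, 6]
    print(elapsed)  # roughly 0.1s rather than 0.4s: the waits overlapped
```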
Upvotes: 12
Reputation: 4040
The reason people say that multi-threading is not very efficient in Python is the Global Interpreter Lock (GIL). Because of the way the interpreter is written, only one thread can safely execute Python code at a time.
This means that if your threads are heavily compute bound, that is, doing lots of work in the interpreter, you effectively still get the performance of a single-threaded program. In that case you might be better off using the multiprocessing module, which has an interface similar to the threading module but launches multiple copies of the interpreter (the downside is that you will have to share memory explicitly).
Where you can still reap speed gains from multithreading in Python is when you are doing something heavily I/O bound. While one thread is waiting for disk or network I/O, the other threads can still execute, because threads release the interpreter lock when they block.
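The "share memory explicitly" caveat mentioned above can be sketched like this (a minimal example of my own, not from the answer): processes don't share Python objects the way threads do, so multiprocessing provides explicit shared types such as `Value` and `Array`.

```python
from multiprocessing import Process, Value

def bump(counter):
    # get_lock() serializes updates to the shared C integer
    with counter.get_lock():
        counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)  # a C int living in memory shared between processes
    procs = [Process(target=bump, args=(counter,)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)  # 4
```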
Upvotes: 22
Reputation: 78850
One option is to use a different implementation of Python, like Jython or IronPython. That way, you still get the benefits of having the Python language without having to deal with the GIL. However, you wouldn't have the ability to use the CPython-only libraries.
Another option is to use constructs other than threads. For example, Stackless Python offers tasklets as an alternative.
Upvotes: 2