Reputation: 119
I have a main thread that needs to run continuously and should create a new handler thread for each piece of data it receives; those handler threads should also run continuously. My problem is that the main thread's run function only runs once: the child thread blocks the while loop in the main thread's run.
import threading

threads = []

class MainThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        # some functions here

    def run(self):
        while True:
            print "main"
            # do some stuff
            data = ""
            client = Client()
            if data == "something":
                c = 0
                found = False
                while not found and c < len(threads):
                    if threads[c].client == client:
                        threads[c].doSomeStuff(data)
                        found = True
                if not found:
                    DataHandler(data, client)

class DataHandler(threading.Thread):
    def __init__(self, data, client):
        threading.Thread.__init__(self)
        self.data = data
        self.client = client
        global threads
        threads.append(self)

    def doSomeStuff(self, data):
        self.data = data
        # some IO and networking stuff

    # some functions here

    def run(self):
        while True:
            if data is not None:
                print "data"
                # do some stuff with data

MainThread().start()
My output is:

main
data
data
data
...
How can I start a DataHandler thread in parallel with the MainThread?
Upvotes: 0
Views: 2089
Reputation: 11730
Python's threading.Thread is not a good choice for CPU-intensive busy loops because of the GIL. According to https://wiki.python.org/moin/GlobalInterpreterLock:
the global interpreter lock, or GIL, is a mutex that prevents multiple native threads from executing Python bytecodes at once. This lock is necessary mainly because CPython's memory management is not thread-safe.
If you need a busy loop, switch to the multiprocessing module from the standard library instead (https://docs.python.org/2/library/multiprocessing.html), so the OS scheduler handles time-slice allocation. From the docs:
The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads.
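As a rough illustration of that approach, here is a minimal sketch that gives each incoming piece of data its own worker process while the main loop keeps running. The handle function, the placeholder input list, the client_id numbering, and the sleep intervals are all illustrative stand-ins, not your actual Client or data source:

import multiprocessing
import time

def handle(data, client_id):
    # each handler runs in its own process, so its loop cannot block the main loop
    while True:
        if data is not None:
            print "data for client %d: %s" % (client_id, data)
            # do some stuff with data
        time.sleep(1)

if __name__ == "__main__":
    workers = []
    for client_id, data in enumerate(["something", "something else"]):  # placeholder input
        p = multiprocessing.Process(target=handle, args=(data, client_id))
        p.daemon = True  # workers are killed when the main process exits
        p.start()
        workers.append(p)

    # the main loop keeps running in parallel with the workers
    while True:
        print "main"
        time.sleep(1)

Each Process gets its own interpreter and its own GIL, so the busy loops genuinely run in parallel; the trade-off is that the workers no longer share memory with the main process, so state like your threads list would have to be exchanged via a multiprocessing.Queue or similar.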
Upvotes: 1