Reputation: 1915
I have a toy example that demonstrates multithreading and locks. Without locks, I obviously get a bad value for the global counter variable. However, even when I put threading.Lock in, I still get inconsistent results, and multiprocessing.Lock() doesn't completely fix the problem either.
import threading
from multiprocessing import Process, Lock

num_experiments = 200
num_threads = 5
iterations_in_one_thread = 500

def f():
    global counter
    for i in range(iterations_in_one_thread):
        with lock:
            counter += 1

bad_count = 0
# lock = threading.Lock()
lock = Lock()

for x in range(num_experiments):
    counter = 0
    threads = []
    for i in range(num_threads):
        t = threading.Thread(target=f)
        threads.append(t)
        t.start()
    for i in threads:
        t.join()
    if counter != num_threads * iterations_in_one_thread:
        bad_count += 1
        print counter

print "Bad count:", bad_count
print "Total runs:", num_experiments
Output I expect: Bad count: 0
Output I get: Bad count: 3 (or sometimes up to 6)
I'm on Python 2.7:
Python 2.7.13 |Anaconda custom (x86_64)| (default, Dec 20 2016, 23:05:08)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin
Upvotes: 0
Views: 731
Reputation: 101
I think that you just made a simple typo.
for i in threads:
    t.join()

should be

for i in threads:
    i.join()
Otherwise you only ever join the last-started thread (t still refers to it after the creation loop), so the other threads may still be running when you check counter. Also, you probably shouldn't mix the multiprocessing library with the threading library; stick with threading.Lock.
Upvotes: 2