Reputation: 304
I'm trying to run multiple CMT trackers simultaneously. For that reason, I'm setting up a Pool of worker processes:
import argparse
import cv2
from multiprocessing import Pool
import numpy as np
import os
import sys
import time
import VARtracker
import util
CMT1 = VARtracker.CMT()
... # code lines removed
# Clean up
cv2.destroyAllWindows()
if args.inputpath is not None:
    # If a path to a file was given, assume it is a single video file
    if os.path.isfile(args.inputpath):
        cap = cv2.VideoCapture(args.inputpath)
        # Skip first frames if required
        if args.skip is not None:
            cap.set(cv2.cv.CV_CAP_PROP_POS_FRAMES, args.skip)
    # Otherwise assume it is a format string for reading images
    else:
        cap = util.FileVideoCapture(args.inputpath)
        # Skip first frames if required
        if args.skip is not None:
            cap.frame = 1 + args.skip

# Check if videocapture is working
if not cap.isOpened():
    print 'Unable to open video input.'
    sys.exit(1)

# Read first frame
status, im0 = cap.read()
im_gray0 = cv2.cvtColor(im0, cv2.COLOR_BGR2GRAY)
im_draw = np.copy(im0)

# Getting initial bounding boxes
tl1 = [405, 160]
br1 = [450, 275]
VARtracker.initialise(CMT1, im_gray0, tl1, br1)

frame = 1
while True:
    pool = Pool(processes=4)
    print frame

    # Read image
    status, im = cap.read()
    if not status:
        break

    im_gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    im_draw = np.copy(im)

    tic = time.time()
    # Serial approach
    #res1 = VARtracker.process_frame(CMT1, im_gray)
    # Parallel approach
    res1 = pool.apply_async(VARtracker.process_frame, (CMT2, im_gray))
    pool.close()
    pool.join()
    res1 = res1.get()
    toc = time.time()

    # Display results
    if res1.has_result:
        cv2.line(im_draw, res1.tl, res1.tr, (255, 0, 0), 4)
        cv2.line(im_draw, res1.tr, res1.br, (255, 0, 0), 4)
        cv2.line(im_draw, res1.br, res1.bl, (255, 0, 0), 4)
        cv2.line(im_draw, res1.bl, res1.tl, (255, 0, 0), 4)

    if not args.quiet:
        cv2.imshow('main', im_draw)
        cv2.waitKey(pause_time)

    # Remember image
    im_prev = im_gray
    frame += 1
Whenever I comment out the serial approach and try the parallel approach with the pool, I come across the following error:
Traceback (most recent call last):
  File "/home/rafael/GIT/CMT-Tracker/VaretoCMT/VARmain.py", line 128, in <module>
    res1 = res1.get()
  File "/usr/lib/python2.7/multiprocessing/pool.py", line 558, in get
    raise self._value
cPickle.PicklingError: Can't pickle <type 'cv2.BRISK'>: attribute lookup cv2.BRISK failed
The full code can be found in VARmain.py, VARtracker.py and util.py.
I've tried so many ways and I still haven't found a way to overcome this Python limitation. I found out that I cannot serialize class methods, only functions. If possible, I would like to solve it using Python standard libraries.
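To illustrate the limitation outside of CMT: the tracker keeps an OpenCV BRISK detector internally (hence the cv2.BRISK in the traceback), and such extension objects cannot be pickled, so they cannot be shipped to Pool workers as arguments. A minimal sketch, assuming OpenCV 2.4.x under Python 2:

import pickle
import cv2

detector = cv2.BRISK()  # the kind of object the CMT instance holds internally
try:
    pickle.dumps(detector)
except Exception as error:
    # fails because OpenCV extension objects do not support pickling;
    # apply_async pickles every argument, which is why handing the CMT
    # instance to the pool blows up
    print error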
Upvotes: 3
Views: 1223
Reputation: 304
I managed to solve it, thanks to @Matt and @Yamaneko. Basically, I moved the block that reads the images into the worker function. As a consequence, if the pool size is 6 and there are six bounding boxes, each frame is read six times (once inside each worker). That's the only way I have found to make it work.
Current version can be found here.
import cv2 as cv
import multiprocessing as mp
import time

import VARtracker

# Queue shared with the workers; on Unix the forked worker processes inherit it
output = mp.Queue()

def worker(folder_path, list_name, top_left, bot_right, index):
    # Each worker reads the frames itself and runs its own CMT instance,
    # so nothing unpicklable ever has to cross the process boundary
    frame_path = folder_path + '/' + list_name[0]
    image_0 = cv.imread(frame_path)
    gray_0 = cv.cvtColor(image_0, cv.COLOR_BGR2GRAY)

    cmt = VARtracker.CMT()
    cmt.initialise(gray_0, top_left, bot_right)
    for name in list_name:
        frame_path = folder_path + '/' + name
        image_now = cv.imread(frame_path)
        gray_now = cv.cvtColor(image_now, cv.COLOR_BGR2GRAY)
        cmt.process_frame(gray_now)
        if cmt.has_result:
            print index, name, zip(cmt.tl, cmt.br)
            output.put((index, name, zip(cmt.tl, cmt.br)))
    print 'Process {} finished'.format(index)

def VARmethod(folder_path, final_frame, top_left, bot_right):
    tic = time.time()
    if len(top_left) == len(bot_right):
        list_frame = [index for index in range(1, final_frame + 1)]
        list_name = [str(index) + '.jpg' for index in list_frame]

        pool = mp.Pool(5)
        for index in range(0, len(top_left)):
            pool.apply_async(worker, args=(folder_path, list_name, top_left[index], bot_right[index], index))
        pool.close()
        pool.join()

        print 'Finished with the script'

    toc = time.time()
    print output.qsize()
    print (toc - tic)
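For reference, a hypothetical call (the folder path and box coordinates below are made-up placeholders); the frames are expected to be stored as 1.jpg ... final_frame.jpg inside folder_path, with one (top_left, bot_right) pair per tracked object:

if __name__ == '__main__':
    # hypothetical example: two objects tracked across 100 frames (1.jpg ... 100.jpg)
    top_left = [[405, 160], [100, 50]]
    bot_right = [[450, 275], [160, 120]]
    VARmethod('/path/to/frames', 100, top_left, bot_right)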
Upvotes: 4
Reputation: 2832
Try this code segment around your classes (this is not my code, credit to Steven Bethard). It is a workaround that teaches pickle how to serialize bound (instance) methods; pickle is what the multiprocessing module uses to send jobs to its worker processes:
import copy_reg
import types

def _pickle_method(method):
    func_name = method.im_func.__name__
    obj = method.im_self
    cls = method.im_class
    return _unpickle_method, (func_name, obj, cls)

def _unpickle_method(func_name, obj, cls):
    for cls in cls.mro():
        try:
            func = cls.__dict__[func_name]
        except KeyError:
            pass
        else:
            break
    return func.__get__(obj, cls)

copy_reg.pickle(types.MethodType, _pickle_method, _unpickle_method)
An example of using it with multiprocessing can be found here: Can't pickle <type 'instancemethod'> when using python's multiprocessing Pool.map()
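To see what the registration buys you, here is a minimal sketch with a toy class of my own (SomeClass is not from the question); once MethodType is registered via copy_reg, a bound method can be handed to Pool.map directly:

from multiprocessing import Pool

class SomeClass(object):
    def compute(self, x):
        return x * x

if __name__ == '__main__':
    obj = SomeClass()
    pool = Pool(processes=2)
    # without the copy_reg registration above, this line raises
    # PicklingError: Can't pickle <type 'instancemethod'>
    print pool.map(obj.compute, [1, 2, 3])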
I'm not saying this would be easy to convert. If you really want multithreading, I suggest Cython with OpenMP: rewrite the parts of the program that need to run in parallel inside with nogil blocks and use from cython.parallel cimport prange for parallel loops...
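As an illustration only (a toy example of mine, not a rewrite of the tracker code), a prange loop in a .pyx file compiled with OpenMP might look like this:

# parallel_sum.pyx -- build with OpenMP enabled, e.g. extra_compile_args=['-fopenmp']
from cython.parallel cimport prange

def sum_squares(double[:] data):
    cdef Py_ssize_t i
    cdef double total = 0.0
    # the loop body runs without the GIL, so iterations are spread across threads;
    # Cython treats 'total' as a reduction variable because of the in-place +=
    for i in prange(data.shape[0], nogil=True):
        total += data[i] * data[i]
    return total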
Upvotes: 3