Reputation: 23
What I want to achieve is to capture my screen in real time and have it detect when a certain image is shown inside the frame. What I've come up with so far is:
Screen Capture:
import time
import numpy as np
import cv2
from PIL import ImageGrab

last_time = time.time()
while True:
    # Grab a region of the screen; the result is a numpy array in RGB order
    screen = np.array(ImageGrab.grab(bbox=(0, 40, 800, 640)))
    print('Loop took {} seconds'.format(time.time() - last_time))
    last_time = time.time()
    # Swap to BGR so the colours display correctly in the OpenCV window
    cv2.imshow('window', cv2.cvtColor(screen, cv2.COLOR_RGB2BGR))
    if cv2.waitKey(25) & 0xFF == ord('q'):
        cv2.destroyAllWindows()
        break
Template Matching:
import cv2
import numpy as np

img_rgb = cv2.imread('frame.png')
img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
template = cv2.imread('template.png', 0)  # load the template in grayscale
w, h = template.shape[::-1]

# Score every position of the template over the frame
res = cv2.matchTemplate(img_gray, template, cv2.TM_CCOEFF_NORMED)
threshold = 0.8
loc = np.where(res >= threshold)

# Draw a box around every location that scored above the threshold
for pt in zip(*loc[::-1]):
    cv2.rectangle(img_rgb, pt, (pt[0] + w, pt[1] + h), (0, 255, 255), 2)

cv2.imshow('Detected', img_rgb)
cv2.waitKey(0)
cv2.destroyAllWindows()
I got both of them working separately but can't manage to fuse them together. What I mainly struggled with was calling imread() on the current frame, since the capture returns a numpy array, while cv2.imread() expects an image file (.png, .jpg, etc.).
Upvotes: 2
Views: 1636
Reputation: 93410
You don't need imread() at all: OpenCV functions operate directly on numpy arrays, so you can run the template matching inside your while(True) loop on the captured frame itself. Convert screen from RGB to GRAY, call matchTemplate() on that grayscale array, and draw the resulting rectangles on the original colour screen, not on its grayscale counterpart.
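Putting the two together, a minimal sketch (reusing the bbox, the template.png file name, and the 0.8 threshold from the question; those exact values are assumptions) could look like this:
import numpy as np
import cv2
from PIL import ImageGrab

# Load the template once, outside the loop (grayscale), and remember its size
template = cv2.imread('template.png', 0)
w, h = template.shape[::-1]
threshold = 0.8

while True:
    # The captured frame is already a numpy array, so no imread() is needed
    screen = np.array(ImageGrab.grab(bbox=(0, 40, 800, 640)))
    frame = cv2.cvtColor(screen, cv2.COLOR_RGB2BGR)   # colour frame for display
    gray = cv2.cvtColor(screen, cv2.COLOR_RGB2GRAY)   # grayscale frame for matching

    res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    loc = np.where(res >= threshold)

    # Draw the detections on the colour frame, not on the grayscale one
    for pt in zip(*loc[::-1]):
        cv2.rectangle(frame, pt, (pt[0] + w, pt[1] + h), (0, 255, 255), 2)

    if loc[0].size:
        print('Template found in this frame')

    cv2.imshow('window', frame)
    if cv2.waitKey(25) & 0xFF == ord('q'):
        cv2.destroyAllWindows()
        break
Loading the template outside the loop keeps the per-frame work down; if all you need is a yes/no signal, checking res.max() >= threshold is enough.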
Upvotes: 1