Reputation: 571
In the code below, I am attempting to output a single face (cropped from a larger image) with CV2:
import cv2
import numpy as np
from PIL import Image
from bottle import request, template

def machine_pst():
    mlimg = request.files.get("mlimg")
    fname = mlimg.filename
    filepath = "/home/assets/faces/"
    mlimg.save(filepath + fname, overwrite=True)
    full_path = filepath + fname
    cascPath = "/home/assets/haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascPath)
    faceSamples = []
    pilImage = Image.open(full_path).convert('L')
    imageNp = np.array(pilImage, 'uint8')
    faces = detector.detectMultiScale(imageNp)
    for (x, y, w, h) in faces:
        faceSamples.append(imageNp[y:y+h, x:x+w])
    img = Image.fromarray(faceSamples[0], 'RGB')
    cv2.imwrite("/home/assets/faces/read.png", img)
    source = "/static/faces/read.png"
    return template("home/machineout", source=source)
With source being passed as a parameter into <img src="{{source}}"> in the template.
If I return the length of faces for an image containing 3 faces, I get "3", so detection seems to work nicely, and if I return any index of faceSamples (e.g. faceSamples[0]), I get data back as well. But when I try to turn that face sample into an image using ...
img = Image.fromarray(faceSamples[0], 'RGB')
I get a ValueError saying there is "not enough image data".
I understand (from a previous answer) that detectMultiScale returns rectangles, not images, but with my additional NumPy slicing, is that still the case? Am I still not fully understanding what the faceSamples array contains? Can it not be turned directly back into an RGB image with the last snippet of code?
Upvotes: 3
Views: 5850
Reputation: 150735
Your problem is here:
pilImage=Image.open(full_path).convert('L')
imageNp=np.array(pilImage,'uint8')
That is, you converted the image into a single-channel, grayscale image, so imageNp (and every crop sliced from it) is a 2-D array. It then makes little sense to do
img = Image.fromarray(faceSamples[0], 'RGB')
as faceSamples[0]
is also a grayscale image: mode 'RGB' expects three channels' worth of data, which is exactly why PIL complains there is "not enough image data".
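A minimal, self-contained sketch of the fix, with a synthetic 2-D array standing in for faceSamples[0]: pass mode 'L', or simply omit the mode and let PIL infer it from the array's shape.

```python
import numpy as np
from PIL import Image

# Synthetic stand-in for faceSamples[0]: a 2-D uint8 (grayscale) crop.
face = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)

# 'RGB' expects an (h, w, 3) buffer, so a 2-D array raises
# ValueError: not enough image data.
# For a single-channel array, use mode 'L' -- or omit the mode entirely:
img = Image.fromarray(face, 'L')   # explicit grayscale
img2 = Image.fromarray(face)       # mode inferred as 'L' from the 2-D shape
```

Either image can then be saved with img.save(...) instead of handing a PIL object to cv2.imwrite.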
Also, as @MarkSetchell's comment suggests, you can use cv2.imread
and related functions instead of PIL
. They interoperate more naturally with other OpenCV functions.
Upvotes: 3