Reputation: 435
I'm attempting to scale down an image using the Python OpenCV bindings (cv2, the newer bindings):
import cv2

cap = cv2.VideoCapture(0)  # capture source; any VideoCapture works here
ret, frame = cap.read()
print(frame.shape)
# prints (720, 1280, 3)

smallsize = (146, 260)
smallframe = cv2.resize(frame, smallsize)
print(smallframe.shape)
# prints (260, 146, 3)
As you can see, the dimensions end up flipped on the scaled-down image. Instead of getting an image with dimensions (WxH) 146x260, I get one whose shape reports 260x146.
What gives?
Upvotes: 21
Views: 5822
Reputation: 7545
This was answered long ago but never accepted. Let me explain a little more for anyone else who gets confused by this. In Python, OpenCV images are NumPy arrays. NumPy shapes, functions, etc. report (height, width), while OpenCV functions and methods expect (width, height). You just need to pay attention:
cv2.anything()    --> use (width, height)
image.anything()  --> use (height, width)
numpy.anything()  --> use (height, width)
Upvotes: 36
Reputation: 8033
Because the size argument takes the columns (width) first, while the first dimension of the matrix is the rows (height). Have a look at the documentation here.
Upvotes: 2