Reputation: 317
I am working on a convolutional neural network based on TensorFlow. To handle the transformations, I use scikit-image functions to reshape the original images. I have run into an unexpected situation where:
import skimage.io
import skimage.transform

def read_img(file):
    img = skimage.io.imread(img_folder + file)
    print(img.shape)
    img = skimage.transform.resize(img, (img_width, img_height), mode=mode)
    return img[:, :, :img_channels]
stops the model from being built, with the following traceback:
File "A:\anoth\...\newmodel.py", line 76, in read_img
img = skimage.transform.resize(img, (img_width, img_height), mode=mode)
File "A:\anoth\...\skimage\transform\_warps.py", line 124, in resize
raise ValueError("len(output_shape) cannot be smaller than the image "
ValueError: len(output_shape) cannot be smaller than the image dimensions
The print statement (print(img.shape)) shows that one of the images entering the model has 4 dimensions:
(2, 480, 720, 3)
whereas the previous files have 3, e.g.:
(480, 720, 3)
What might be happening here? What is this 4th dimension when the inputs are all images?
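For reference, one workaround I am considering (assuming the extra leading dimension is a frame axis, as when a multi-frame image such as an animated GIF is loaded as (n_frames, height, width, channels)) is to keep only the first frame before resizing. A minimal sketch with a NumPy array standing in for the loaded image:

```python
import numpy as np

def first_frame(img):
    """If the array has a leading frame axis, e.g. a multi-frame
    image loaded as (n_frames, H, W, 3), keep only the first frame
    so the result is always (H, W, channels)."""
    if img.ndim == 4:
        img = img[0]
    return img

# A 2-frame array shaped like the one triggering the error:
multi = np.zeros((2, 480, 720, 3))
print(first_frame(multi).shape)   # (480, 720, 3)

# A normal single image passes through unchanged:
single = np.zeros((480, 720, 3))
print(first_frame(single).shape)  # (480, 720, 3)
```

I am not sure this is the right interpretation of the 4th dimension, though, which is why I am asking.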
Upvotes: 3
Views: 1517