Juanvulcano

Reputation: 1396

Numpy array of images wrong dimension Python and Keras

I'm building an image classifier and trying to compute the features for a dataset using Keras, but my array's dimensions are not in the right format. I'm getting

ValueError: Error when checking : expected input_1 to have 4 dimensions, but got array with shape (324398, 1)

My code is this:

import glob
import numpy as np
from PIL import Image
from keras.applications.resnet50 import ResNet50

def extract_resnet(X):
    # X : numpy array of images
    # include_top=False drops the final fully connected layer used for predictions
    resnet_model = ResNet50(input_shape=(image_h, image_w, 3),
                            weights='imagenet', include_top=False)
    features_array = resnet_model.predict(X)
    return features_array

filelist = glob.glob('dataset/*.jpg')
myarray = np.array([np.array(Image.open(fname)) for fname in filelist])
print(extract_resnet(myarray))

So it looks like the images array is only two-dimensional for some reason, when it should be four-dimensional. How can I convert myarray so that it works with the feature extractor?

Upvotes: 2

Views: 1869

Answers (1)

Maxim

Reputation: 53758

First, make sure that all of the images in the dataset directory have the same size (image_h, image_w, 3):

print([np.array(Image.open(fname)).shape for fname in filelist])

If they are not, you won't be able to form a mini-batch, so you'll need to select only the subset of images with the right size (see the sketch below). If the sizes are right, you can then reshape the array manually:

myarray = myarray.reshape([-1, image_h, image_w, 3])

... to match the ResNet input specification exactly.
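
A minimal sketch of the filtering step might look like the following, assuming image_h and image_w are already defined; the variable names here are illustrative, not from the original code:

import glob
import numpy as np
from PIL import Image

filelist = glob.glob('dataset/*.jpg')
images = [np.array(Image.open(fname)) for fname in filelist]

# Keep only images whose shape matches the expected (image_h, image_w, 3)
suitable = [img for img in images if img.shape == (image_h, image_w, 3)]

# np.stack builds a 4D array of shape (num_images, image_h, image_w, 3)
myarray = np.stack(suitable)
print(myarray.shape)

As an alternative to discarding mismatched images (not part of the answer above), each image could be resized to a common size first, e.g. with Image.open(fname).resize((image_w, image_h)), before stacking.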

Upvotes: 2
