user3503711

Reputation: 2056

Tensorflow image prediction from disk: ValueError: Layer expects X input(s), but it received Y input tensors

I have already trained a CNN model to classify images. I want to load images from a folder on my disk and predict which category each one belongs to. If I predict one image at a time, my code works. But when I use a loop to iterate over all the images and predict their classes, it fails.

Below is my code:

import os
import cv2
import numpy as np

test_data = []
test_path = "~/test2/"
IMG_SIZE = 100

for img in os.listdir(test_path):
    if ".jpg" in img:
        test_img = os.path.join(test_path, img)
        img_array = cv2.imread(test_img, cv2.IMREAD_GRAYSCALE)  # load the image in grayscale
        new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))  # resize the image
        new_array = np.asarray(new_array).reshape(-1, IMG_SIZE, IMG_SIZE, 1)  # reshape as numpy array (same as training set)
        test_data.append([new_array])

But when I run the prediction with:

pred = model.predict(test_data)

it shows the following error:

ValueError                                Traceback (most recent call last)
----> 1 pred = model.predict(test_data)
      2 print(pred[0])

~/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
   1725     for step in data_handler.steps():
   1726       callbacks.on_predict_batch_begin(step)
-> 1727       tmp_batch_outputs = self.predict_function(iterator)
   1728       if data_handler.should_sync:
   1729         context.async_wait()

ValueError: Layer sequential_4 expects 1 input(s), but it received 2 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 100) dtype=uint8>, <tf.Tensor 'IteratorGetNext:1' shape=(None, 100) dtype=uint8>]

Any idea why the error appears only when I use the loop? In short, the error occurs as soon as there is more than one element in "test_data".

Thank you in advance.

Upvotes: 0

Views: 887

Answers (2)

user3503711

Reputation: 2056

I found the solution. The trick is to convert the list to a NumPy array and reshape it after the loop, once all images are loaded. When each image is reshaped to (-1, IMG_SIZE, IMG_SIZE, 1) inside the loop, Keras interprets the resulting list of 4-D arrays as multiple separate inputs, which is what caused the error.

import os
import cv2
import numpy as np

test_data = []
test_path = "~/test2/"
IMG_SIZE = 100

for img in os.listdir(test_path):
    if ".jpg" in img:
        test_img = os.path.join(test_path, img)
        img_array = cv2.imread(test_img, cv2.IMREAD_GRAYSCALE)  # load the image in grayscale
        new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))  # resize the image
        test_data.append(new_array)

# Convert and reshape once, after the loop, so the model receives a single
# input tensor of shape (N, IMG_SIZE, IMG_SIZE, 1)
test_data = np.asarray(test_data).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
pred = model.predict(test_data)
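As a follow-up, if the model's last layer outputs one score per class (for example a softmax layer; this is an assumption, since the model definition is not shown), the predicted class index for each image can be read off with np.argmax:

# Assumes one output score per class (e.g. softmax); for a single
# sigmoid output, threshold pred at 0.5 instead.
predicted_classes = np.argmax(pred, axis=1)
print(predicted_classes)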

Upvotes: 0

Loading images from disk and predicting can be done efficiently with the tf.data module in TensorFlow. A full guide is available in the TensorFlow documentation [1]. Below is a minimal example, based on your code, that builds a data pipeline from a Python generator using tf.data.Dataset.from_generator [2].

import os
import cv2
import numpy as np
import tensorflow as tf

TEST_PATH = "~/test2/"
IMAGE_SIZE = 100
IMAGE_SHAPE = (IMAGE_SIZE, IMAGE_SIZE, 1)
IMAGE_DTYPE = tf.float32

def load_data():
    for image in os.listdir(TEST_PATH):
        if ".jpg" in image:
            image_path = os.path.join(TEST_PATH, image)  # Get the image path
            image_array = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)  # Load the image in grayscale
            image_array = cv2.resize(image_array, (IMAGE_SIZE, IMAGE_SIZE))  # Resize the image
            image_array = image_array.reshape(IMAGE_SHAPE).astype(np.float32)  # Reshape and cast to match the declared signature
            yield image_array

output_signature = tf.TensorSpec(shape=IMAGE_SHAPE, dtype=IMAGE_DTYPE)
test_data = tf.data.Dataset.from_generator(load_data, output_signature=output_signature)
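One caveat not shown above: model.predict does not batch a tf.data.Dataset for you, so the dataset should be batched before it is passed to the model. A minimal sketch, assuming a batch size of 32:

# Batch the dataset; Keras expects batched elements when a
# tf.data.Dataset is passed directly to model.predict.
test_data = test_data.batch(32)
pred = model.predict(test_data)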
  1. https://www.tensorflow.org/guide/data
  2. https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator

Upvotes: 0
