Yashwanth

Reputation: 27

ValueError: Data cardinality is ambiguous: x sizes: 150000 y sizes: 50000 Make sure all arrays contain the same number of samples

Hi, I am using this code and getting the following error:

ValueError: Data cardinality is ambiguous:
  x sizes: 150000
  y sizes: 50000
Make sure all arrays contain the same number of samples.

I tried changing the reshape arguments and even numpy.transpose, but with no luck. Can anyone help?

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D

(x_train, y_train) , (x_test, y_test) = datasets.cifar10.load_data()

#x_train.shape #(50000, 32, 32, 3) 
#x_test.shape  #(10000, 32, 32, 3)


x_train = x_train.reshape(-1, 32, 32, 1)
x_test = x_test.reshape(-1, 32, 32 ,1)


x_train = x_train.astype('float32')         # change integers to 32-bit floating point numbers 
x_test = x_test.astype('float32')
x_train /= 255.0              
x_test /= 255.0


model = tf.keras.models.Sequential() 
model.add(tf.keras.layers.Conv2D(32, (3, 3), padding='same', activation='relu')) 
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2))) 
model.add(tf.keras.layers.Conv2D(64, (3, 3), padding='same', activation='relu')) 
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2))) 
model.add(tf.keras.layers.Conv2D(128, (3, 3), padding='same', activation='relu')) 
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model.add(tf.keras.layers.Conv2D(256, (3, 3), padding='same', activation='relu')) 
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model.add(tf.keras.layers.Conv2D(512, (3, 3), padding='same', activation='relu')) 
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))

model.add(tf.keras.layers.Flatten()) 
model.add(tf.keras.layers.Dense(512, activation=tf.nn.relu)) 
model.add(tf.keras.layers.Dense(512, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax)) 
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy']) 
model.build(input_shape=(512,32,32,1)) 
model.summary() 

model.fit(x_train, y_train, batch_size=1000, epochs=1) 




score = model.evaluate(x_test, y_test) 
print('Test loss:', score[0]) 
print('Test accuracy:', score[1])


predictions = model.predict([x_test])
#print(predictions)

print(np.argmax(predictions[0]))

img_path = x_test[0]
print(img_path.shape)
if(len(img_path.shape) == 3):
    plt.imshow(np.squeeze(img_path))
elif(len(img_path.shape) == 2):
    plt.imshow(img_path)
else:
    print("Higher dimensional data")

Upvotes: 2

Views: 5829

Answers (1)

Abhishek Prajapat

Reputation: 1878

There are some changes you would have to make; I will write an example for you:

import numpy as np
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D

(x_train, y_train) , (x_test, y_test) = datasets.cifar10.load_data()
x_train = x_train.astype('float32')         
x_test = x_test.astype('float32')
x_train /= 255.0              
x_test /= 255.0

model = tf.keras.models.Sequential() 
model.add(tf.keras.layers.InputLayer(input_shape=(32,32,3)))
model.add(tf.keras.layers.Conv2D(32, (3, 3), padding='same', activation='relu')) 
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2))) 
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax)) 

model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy']) 

model.summary() 

model.fit(x_train, y_train, batch_size=32, epochs=1) 

Changes:

  1. You don't need to reshape x_train and x_test; they are already in the correct shape: (50000, 32, 32, 3) and (10000, 32, 32, 3). Since reshape preserves the total number of elements, reshaping to (-1, 32, 32, 1) folds the 3 channels into extra samples, turning the first axis from 50000 into 150000 while y_train still has 50000 labels. That mismatch is exactly what the error reports (see the first sketch after this list).
  2. It is good practice to declare the input shape up front with tf.keras.layers.InputLayer instead of calling model.build() later.
  3. I haven't made that change here, but whenever possible you should pass the layers to tf.keras.Sequential as a list (more readable, less prone to error) rather than calling model.add() repeatedly; see the second sketch below. The functional API is for when you need to build a more complex architecture.
  4. You can now grow the model by adding more layers; I used only a few to keep the example short.
  5. The full input shape is (batch_size, img_width, img_height, img_channels), but the batch size can vary and is therefore taken as None by default, so we leave it out and pass only (img_width, img_height, img_channels). Your images are 32 wide, 32 high, with 3 channels, so we pass (32, 32, 3).
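
To illustrate point 1, here is a minimal sketch (using a dummy array rather than the real dataset) of why the reshape produces 150000 samples:

import numpy as np

# Dummy array with the same shape and dtype as CIFAR-10's x_train
x = np.zeros((50000, 32, 32, 3), dtype='uint8')

# reshape preserves the total element count (50000*32*32*3), so
# shrinking the channel axis from 3 to 1 triples the sample axis:
print(x.reshape(-1, 32, 32, 1).shape)   # (150000, 32, 32, 1)
# 150000 x samples no longer match the 50000 labels in y_train.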

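And for point 3, a minimal sketch of the list style of tf.keras.Sequential, rebuilding the same small model as above (illustrative only):

import tensorflow as tf

# Same layers as the example above, passed as a list instead of
# repeated model.add() calls:
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(32, (3, 3), padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])

model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
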
If this solved your issue, then kindly upvote or give it a green tick.

Upvotes: 1
