Reputation: 444
Error:
Traceback (most recent call last):
  File "C:/Users/xx/abc/Final.py", line 167, in <module>
    tf.app.run()
  File "C:\Users\xx\tensorflow\python\platform\app.py", line 126, in run
    _sys.exit(main(argv))
  File "C:/Users/xx/abc/Final.py", line 148, in main
    hooks=[logging_hook])
  File "C:\Users\xx\tensorflow\python\estimator\estimator.py", line 363, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "C:\Users\xx\tensorflow\python\estimator\estimator.py", line 843, in _train_model
    return self._train_model_default(input_fn, hooks, saving_listeners)
  File "C:\Users\xx\tensorflow\python\estimator\estimator.py", line 856, in _train_model_default
    features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
  File "C:\Users\xx\tensorflow\python\estimator\estimator.py", line 831, in _call_model_fn
    model_fn_results = self._model_fn(features=features, **kwargs)
  File "C:/Users/xx/abc/Final.py", line 61, in cnn_model_fn
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
  File "C:\Users\xx\tensorflow\python\ops\losses\losses_impl.py", line 853, in sparse_softmax_cross_entropy
    name="xentropy")
  File "C:\Users\xx\tensorflow\python\ops\nn_ops.py", line 2046, in sparse_softmax_cross_entropy_with_logits
    logits.get_shape()))
ValueError: Shape mismatch: The shape of labels (received (100,)) should equal the shape of logits except for the last dimension (received (300, 10)).
Train input function:
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": train_data},
    y=train_labels,
    batch_size=100,
    num_epochs=None,
    shuffle=True)
All dataset shapes:
print(train_data.shape)
# Output: (9490, 2352)
train_labels = np.asarray(label_MAX[0], dtype=np.int32)
print(train_labels.shape)
# Output: (9490,)
eval_data = datasets[1]  # Returns np.array
print(eval_data.shape)
# Output: (3175, 2352)
eval_labels = np.asarray(label_MAX[1], dtype=np.int32)
print(eval_labels.shape)
# Output: (3175,)
I read other StackOverflow questions, and most of them pointed to the loss calculation as the source of the error. Is the fact that the code sends a batch of 100 labels causing the issue?
How can I resolve this? Is the root of the problem that the number of images and labels is not a multiple of 100?
My model is being trained on only two classes (0 and 1), so I suppose I must change this line
logits = tf.layers.dense(inputs=dropout, units=10)
and set the number of units to 2?
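For reference, here is what I think that change would look like (just a sketch, assuming my labels really are only 0 and 1):
# Two-class head: units must match the number of label classes,
# since sparse_softmax_cross_entropy expects labels in [0, num_classes)
logits = tf.layers.dense(inputs=dropout, units=2)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)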
Upvotes: 3
Views: 9665
Reputation: 493
I got the same error. I realized that I hadn't flattened my image data. Once I included a Flatten() layer, I was able to train the network properly. Could you try adding a Flatten layer before the Dense layers?
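Something like this (a rough sketch in the tf.layers style of your model_fn; pool2 is just a placeholder name for whatever tensor feeds your dense layers):
# Flatten the last conv/pool output before the dense layers
pool2_flat = tf.layers.flatten(pool2)  # e.g. [batch, 7, 7, 64] -> [batch, 3136]
dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
dropout = tf.layers.dropout(inputs=dense, rate=0.4,
                            training=(mode == tf.estimator.ModeKeys.TRAIN))
logits = tf.layers.dense(inputs=dropout, units=10)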
Upvotes: 3
Reputation: 4542
The issue comes from the fact that you are using RGB images. The model is designed to be used with grayscale images, as shown in the line
input_layer = tf.reshape(features["x"], [-1, 28, 28, 1])
near the top of the CNN definition. Having 3 channels instead of 1 means that the batch size here will be three times too large.
To fix that, change that line to
input_layer = tf.reshape(features["x"], [-1, 28, 28, 3])
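Since each of your images has 2352 values and 28 * 28 * 3 = 2352, the data really is 28x28 RGB. A sketch of what happens with each reshape, assuming a batch of 100 images:
# Wrong: 100 RGB images (100 * 2352 values) get reinterpreted as
# 300 single-channel images, so logits come out as (300, 10) while labels stay (100,)
input_layer = tf.reshape(features["x"], [-1, 28, 28, 1])
# Right: keep the 3 channels so the batch dimension stays 100
input_layer = tf.reshape(features["x"], [-1, 28, 28, 3])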
Upvotes: 3