Reputation: 61
I'm working on neural networks following sentdex's tutorial. Here is my code (almost the same as his), but it raises an unexpected error.
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

n_nodes_hl1 = 500
n_nodes_hl2 = 500
n_nodes_hl3 = 500

n_classes = 0
batch_size = 100

x = tf.placeholder('float', [None, 784])
y = tf.placeholder('float')

def neural_model(impuls):
    hidden_1_layer = {'weights': tf.Variable(tf.random_normal([784, n_nodes_hl1])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}

    hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl2]))}

    hidden_3_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl3]))}

    output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl3, n_classes])),
                    'biases': tf.Variable(tf.random_normal([n_classes]))}

    l1 = tf.add(tf.matmul(impuls, hidden_1_layer['weights']), hidden_1_layer['biases'])
    l1 = tf.nn.relu(l1)

    l2 = tf.add(tf.matmul(l1, hidden_2_layer['weights']), hidden_2_layer['biases'])
    l2 = tf.nn.relu(l2)

    l3 = tf.add(tf.matmul(l2, hidden_3_layer['weights']), hidden_3_layer['biases'])
    l3 = tf.nn.relu(l3)

    output = tf.matmul(l3, output_layer['weights']) + output_layer['biases']
    return output

def train_neural_network(x):
    global y
    prediction = neural_model(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
    optimizer = tf.train.AdamOptimizer().minimize(cost)

    hm_epochs = 10
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        for epoch in range(hm_epochs):
            epoch_loss = 0
            for _ in range(int(mnist.train.num_examples / batch_size)):
                x, y = mnist.train.next_batch(batch_size)
                _, c = sess.run([optimizer, cost], feed_dict={x:x, y:y})
                epoch_loss += c
            print('Epoch: ', epoch, 'completed out of', hm_epochs, 'loss: ', epoch_loss)

        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        ac = tf.reduce_mean(tf.cast(correct, 'float'))
        print('acc: ', ac.eval({x:mnist.test_images, y:mnist.test_labels}))

train_neural_network(x)
I think the first part of the code is not necessary, but I've included it anyway. It raises the following error at line 53:
_, c = sess.run([optimizer, cost], feed_dict={x:x, y:y})
TypeError: unhashable type: 'numpy.ndarray'
In sentdex's tutorial his code is slightly different from mine and he gets no error, and neither do the other viewers, judging by the comments. Am I doing something wrong? What should I do? Thanks for the help.
**EDIT:** I just solved the first error, but now it raises this:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1323, in _do_call
return fn(*args)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1302, in _run_fn
status, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Reshape cannot infer the missing input size for an empty tensor unless all specified input sizes are non-zero
[[Node: Reshape = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](add, concat)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "a.py", line 61, in <module>
train_neural_network(x)
File "a.py", line 53, in train_neural_network
_, c = sess.run([optimizer, cost], feed_dict={x:_x, y:_y})
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 889, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1120, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1317, in _do_run
options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1336, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Reshape cannot infer the missing input size for an empty tensor unless all specified input sizes are non-zero
[[Node: Reshape = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](add, concat)]]
Caused by op 'Reshape', defined at:
File "a.py", line 61, in <module>
train_neural_network(x)
File "a.py", line 41, in train_neural_network
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction,labels=y))
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/nn_ops.py", line 1776, in softmax_cross_entropy_with_logits
precise_logits = _flatten_outer_dims(precise_logits)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/nn_ops.py", line 1551, in _flatten_outer_dims
output = array_ops.reshape(logits, array_ops.concat([[-1], last_dim_size], 0))
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 3938, in reshape
"Reshape", tensor=tensor, shape=shape, name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): Reshape cannot infer the missing input size for an empty tensor unless all specified input sizes are non-zero
[[Node: Reshape = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](add, concat)]]
Upvotes: 0
Views: 3122
Reputation: 527
As for the second InvalidArgumentError, you need to change n_classes = 0 to n_classes = 10; otherwise reshaping prediction, which has shape (100, 0) instead of (100, 10), fails on the empty tensor.
Also, a learning_rate=0.05 with hm_epochs=10 will reach nearly 89% accuracy.
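A minimal sketch of those two changes (passing the learning rate explicitly is this answer's suggestion, not part of the original tutorial code):

n_classes = 10  # one output unit per MNIST digit class, so prediction has shape (batch_size, 10)

# inside train_neural_network, hand the suggested learning rate to Adam
optimizer = tf.train.AdamOptimizer(learning_rate=0.05).minimize(cost)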
Upvotes: 0
Reputation: 19123
You define the placeholders x and y at the top, but then in train_neural_network you redefine them as numpy.ndarrays (which are not hashable) when you call x, y = mnist.train.next_batch(batch_size).
Change that as follows:
_x, _y = mnist.train.next_batch(batch_size)
_, c = sess.run([optimizer, cost], feed_dict={x:_x, y:_y})
and the problem should go away.
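For illustration, here is a small standalone snippet (not from the original post) showing why the rebinding breaks the feed_dict: dictionary keys must be hashable, and numpy.ndarrays are not.

import numpy as np

x = np.array([1.0, 2.0, 3.0])  # pretend the batch has overwritten the name x
try:
    feed_dict = {x: x}  # dict keys must be hashable; an ndarray key raises TypeError
except TypeError as e:
    print(e)  # unhashable type: 'numpy.ndarray'

With the _x/_y names above, the feed_dict keys remain the placeholder tensors, which are hashable.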
Upvotes: 1