Tristan

Reputation: 2088

Adding second hidden layer in Tensorflow breaks loss calculation

I'm working on assignment three of the Udacity Deep Learning course. I have a working neural network with one hidden layer, but when I add a second one, the loss becomes nan.

This is the graph code:

import tensorflow as tf

num_nodes_layer_1 = 1024
num_nodes_layer_2 = 128
num_inputs = 28 * 28
num_labels = 10
batch_size = 128

graph = tf.Graph()
with graph.as_default():

    # Input data. For the training data, we use a placeholder that will be fed
    # at run time with a training minibatch.
    tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, num_inputs))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # variables
    # hidden layer 1
    hidden_weights_1 = tf.Variable(tf.truncated_normal([num_inputs, num_nodes_layer_1]))
    hidden_biases_1 = tf.Variable(tf.zeros([num_nodes_layer_1]))

    # hidden layer 2
    hidden_weights_2 = tf.Variable(tf.truncated_normal([num_nodes_layer_1, num_nodes_layer_2]))
    hidden_biases_2 = tf.Variable(tf.zeros([num_nodes_layer_2]))

    # linear layer
    weights = tf.Variable(tf.truncated_normal([num_nodes_layer_2, num_labels]))
    biases = tf.Variable(tf.zeros([num_labels]))

    # Training computation.
    y1 = tf.nn.relu(tf.matmul(tf_train_dataset, hidden_weights_1) + hidden_biases_1)
    y2 = tf.nn.relu(tf.matmul(y1, hidden_weights_2) + hidden_biases_2)
    logits = tf.matmul(y2, weights) + biases

    # Calc loss
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(labels=tf_train_labels, logits=logits))

    # Optimizer.
    # We are going to find the minimum of this loss using gradient descent.
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # Predictions for the training, validation, and test data.
    # These are not part of training, but merely here so that we can report
    # accuracy figures as we train.
    train_prediction = tf.nn.softmax(logits)

    y1_valid = tf.nn.relu(tf.matmul(tf_valid_dataset, hidden_weights_1) + hidden_biases_1)
    y2_valid = tf.nn.relu(tf.matmul(y1_valid, hidden_weights_2) + hidden_biases_2)
    valid_prediction = tf.nn.softmax(tf.matmul(y2_valid, weights) + biases)

    y1_test = tf.nn.relu(tf.matmul(tf_test_dataset, hidden_weights_1) + hidden_biases_1)
    y2_test = tf.nn.relu(tf.matmul(y1_test, hidden_weights_2) + hidden_biases_2)
    test_prediction = tf.nn.softmax(tf.matmul(y2_test, weights) + biases)

It does not raise an error, but after the first print the loss comes out as nan and the network stops learning:

Initialized
Minibatch loss at step 0: 2133.468750
Minibatch accuracy: 8.6%
Validation accuracy: 10.0%
Minibatch loss at step 400: nan
Minibatch accuracy: 9.4%
Validation accuracy: 10.0%
Minibatch loss at step 800: nan
Minibatch accuracy: 11.7%
Validation accuracy: 10.0%
Minibatch loss at step 1200: nan
Minibatch accuracy: 4.7%
Validation accuracy: 10.0%
Minibatch loss at step 1600: nan
Minibatch accuracy: 7.8%
Validation accuracy: 10.0%
Minibatch loss at step 2000: nan
Minibatch accuracy: 6.2%
Validation accuracy: 10.0%
Test accuracy: 10.0%

When I remove the second layer it trains and I get an accuracy of about 85%. With a second layer I would expect the accuracy to be between 80% and 90%.

Am I using the wrong optimizer? Is it just something stupid I missed?

This is the session code:

num_steps = 2001

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    print("Initialized")
    for step in range(num_steps):
        # Pick an offset within the training data, which has been randomized.
        # Note: we could use better randomization across epochs.
        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        # Generate a minibatch.
        batch_data = train_dataset[offset:(offset + batch_size), :]
        batch_labels = train_labels[offset:(offset + batch_size), :]
        # Prepare a dictionary telling the session where to feed the minibatch.
        # The key of the dictionary is the placeholder node of the graph to be fed,
        # and the value is the numpy array to feed to it.
        feed_dict = {
            tf_train_dataset : batch_data, 
            tf_train_labels : batch_labels,
        }
        _, l, predictions = session.run(
          [optimizer, loss, train_prediction], feed_dict=feed_dict)
        if (step % 400 == 0):
            print("Minibatch loss at step %d: %f" % (step, l))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
    acc = accuracy(test_prediction.eval(), test_labels)
    print("Test accuracy: %.1f%%" % acc)

Upvotes: 1

Views: 511

Answers (1)

squadrick

Reputation: 770

Your learning rate of 0.5 is too high; set it to 0.05 and it will converge.
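
The only change needed is the optimizer line in your graph code:

    optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)

With that one change (and training for more steps than your original 2001), the loss stays finite: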

Minibatch loss at step 0: 1506.469238
Minibatch loss at step 400: 7796.088867
Minibatch loss at step 800: 9893.363281
Minibatch loss at step 1200: 5089.553711
Minibatch loss at step 1600: 6148.481445
Minibatch loss at step 2000: 5257.598145
Minibatch loss at step 2400: 1716.116455
Minibatch loss at step 2800: 1600.826538
Minibatch loss at step 3200: 941.884766
Minibatch loss at step 3600: 1033.936768
Minibatch loss at step 4000: 1808.775757
Minibatch loss at step 4400: 113.909866
Minibatch loss at step 4800: 49.800560
Minibatch loss at step 5200: 20.392700
Minibatch loss at step 5600: 6.253595
Minibatch loss at step 6000: 4.372780
Minibatch loss at step 6400: 6.862935
Minibatch loss at step 6800: 6.951239
Minibatch loss at step 7200: 3.528607
Minibatch loss at step 7600: 2.968611
Minibatch loss at step 8000: 3.164592
...
Minibatch loss at step 19200: 2.141401

Also, a few pointers:

  1. tf_train_dataset and tf_train_labels should be tf.placeholders of shape [None, 784] and [None, 10] respectively. The None dimension lets you vary the batch size during training instead of being locked to a fixed size such as 128 (see the sketch after this list).

  2. Instead of defining tf_valid_dataset and tf_test_dataset as tf.constants, feed your validation and test datasets through the same placeholder via feed_dict. This lets you drop the extra validation and test ops at the end of your graph.

  3. I'd also recommend sampling a different batch of validation and test data for each accuracy check, rather than reusing the same batch every time.
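
A minimal sketch of pointers 1 and 2, assuming the same TF 1.x graph structure as the question:

    # Placeholders with a flexible batch dimension (pointer 1):
    tf_dataset = tf.placeholder(tf.float32, shape=[None, 784])
    tf_labels = tf.placeholder(tf.float32, shape=[None, 10])

    # ... build y1, y2, logits, loss, and optimizer from tf_dataset as before ...
    prediction = tf.nn.softmax(logits)

    # One prediction op now serves training, validation, and test (pointer 2):
    # val_pred = session.run(prediction, feed_dict={tf_dataset: valid_dataset})
    # test_pred = session.run(prediction, feed_dict={tf_dataset: test_dataset})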

Upvotes: 3
