K K

Reputation: 75

TensorFlow slim and assertion error

I am new to TensorFlow and have been experimenting with slim. I tried to translate the MNIST tutorial from the TensorFlow tutorials into slim syntax. It worked fine with an unbatched set of images fed to the model, but after I added a tf.train.batch input queue it stopped working when I run the entire file, giving this error:

Traceback (most recent call last):
  File ".../slim.py", line 43, in <module>
    train_op = slim.learning.create_train_op(loss, optimiser)
  File "...\Python\Python35\lib\site-packages\tensorflow\contrib\slim\python\slim\learning.py", line 442, in create_train_op
    assert variables_to_train
AssertionError

However, I can selectively re-run the create_train_op line and then train the model, although the loss does not decrease and it essentially doesn't work. This still lets me get the graph visualisation from TensorBoard (attached below), and I cannot see any errors in it.

I know I am doing something wrong, but am not sure where it is.

import tensorflow as tf
import time
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow.contrib.slim as slim


def model(inputs, is_training=True):
    end_points = {}
    with slim.arg_scope([slim.conv2d, slim.fully_connected], activation_fn=tf.nn.relu,
                        weights_initializer=tf.truncated_normal_initializer(stddev=0.1)):
        net = slim.conv2d(inputs, 32, [5, 5], scope="conv1")
        end_points['conv1'] = net
        net = slim.max_pool2d(net, [2, 2], scope="pool1")
        end_points['pool1'] = net
        net = slim.conv2d(net, 64, [5, 5], scope="conv2")
        end_points['conv2'] = net
        net = slim.max_pool2d(net, [2, 2], scope="pool2")
        end_points['pool2'] = net
        net = slim.flatten(net, scope="flatten")
        net = slim.fully_connected(net, 1024, scope="fc1")
        end_points['fc1'] = net
        net = slim.dropout(net, keep_prob=0.75, is_training=is_training, scope="dropout")
        net = slim.fully_connected(net, 10, scope="final", activation_fn= None)
        end_points['final'] = net
    return net, end_points

mnist = input_data.read_data_sets("MNIST_data/", one_hot=False)
batch = mnist.train.next_batch(20000)

x_image = tf.reshape(batch[0], [-1,28,28,1])
label = tf.one_hot(batch[1], 10)

image, labels = tf.train.batch([x_image[0], label[0]], batch_size= 100)

with tf.Graph().as_default():
    tf.logging.set_verbosity(tf.logging.DEBUG)
    logits, _ = model(image)
    predictions = tf.nn.softmax(logits)
    loss = slim.losses.softmax_cross_entropy(predictions, labels)
    config = tf.ConfigProto()
    optimiser = tf.train.AdamOptimizer(1e-4)
    train_op = slim.learning.create_train_op(loss, optimiser)
    thisloss = slim.learning.train(train_op, "C:/temp/test2", number_of_steps=100, save_summaries_secs=30, session_config=config)

[TensorBoard graph visualisation of the model]

Upvotes: 0

Views: 1107

Answers (1)

Sergio Guadarrama

Reputation: 257

You need to create all ops under the same graph, and that includes the input data ops. In your code the input pipeline is built before the with tf.Graph().as_default(): block, so the inputs and the model end up split across graphs and create_train_op finds no trainable variables to train (variables_to_train is empty, which is exactly what the failing assertion checks):

with tf.Graph().as_default():
  mnist = input_data.read_data_sets("MNIST_data/", one_hot=False)
  batch = mnist.train.next_batch(20000)

  x_image = tf.reshape(batch[0], [-1,28,28,1])
  label = tf.one_hot(batch[1], 10)

  image, labels = tf.train.batch([x_image[0], label[0]], batch_size= 100)

  tf.logging.set_verbosity(tf.logging.DEBUG)
  logits, _ = model(image)
  predictions = tf.nn.softmax(logits)
  loss = slim.losses.softmax_cross_entropy(predictions, labels)
  config = tf.ConfigProto()
  optimiser = tf.train.AdamOptimizer(1e-4)
  train_op = slim.learning.create_train_op(loss, optimiser)
  thisloss = slim.learning.train(train_op, "C:/temp/test2", number_of_steps=100, save_summaries_secs=30, session_config=config)
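If it helps to verify the fix, a minimal sketch of a sanity check (placed inside the with tf.Graph().as_default(): block, just before create_train_op; this is an illustration, not part of the original answer) could look like this:

  # Sketch of a sanity check: the input tensors, the model output and the
  # trainable weights should all belong to the graph that is currently the default.
  graph = tf.get_default_graph()
  assert image.graph is graph        # input pipeline tensor
  assert logits.graph is graph       # model output tensor
  assert tf.trainable_variables(), "no trainable variables found in this graph"

If that last assert fails, you are back in the situation from the question: create_train_op defaults variables_to_train to tf.trainable_variables(), so an empty list triggers the AssertionError.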

Upvotes: 1
