xpirad

Reputation: 21

tensorflow TypeError: Fetch argument None has invalid type <class 'NoneType'>

I was doing cs231n assignment 2 and encountered this problem.

I'm using tensorflow-gpu 1.5.0

Code is as follows:

# define our input (e.g. the data that changes every batch)
# The first dim is None, and gets set automatically based on the batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)

# define model
def complex_model(X,y,is_training):
    pass

y_out = complex_model(X,y,is_training)

# Now we're going to feed a random batch into the model 
# and make sure the output is the right size
x = np.random.randn(64, 32, 32,3)
with tf.Session() as sess:
    with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0"
        tf.global_variables_initializer().run()

        ans = sess.run(y_out,feed_dict={X:x,is_training:True})
        %timeit sess.run(y_out,feed_dict={X:x,is_training:True})
        print(ans.shape)
        print(np.array_equal(ans.shape, np.array([64, 10])))

Complete traceback

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-6-97f0b6c5a72e> in <module>()
      6         tf.global_variables_initializer().run()
      7 
----> 8         ans = sess.run(y_out,feed_dict={X:x,is_training:True})
      9         get_ipython().run_line_magic('timeit', 'sess.run(y_out,feed_dict={X:x,is_training:True})')
     10         print(ans.shape)

c:\users\kasper\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata)
    893     try:
    894       result = self._run(None, fetches, feed_dict, options_ptr,
--> 895                          run_metadata_ptr)
    896       if run_metadata:
    897         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

c:\users\kasper\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1111     # Create a fetch handler to take care of the structure of fetches.
   1112     fetch_handler = _FetchHandler(
-> 1113         self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles)
   1114 
   1115     # Run request and get response.

c:\users\kasper\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\client\session.py in __init__(self, graph, fetches, feeds, feed_handles)
    419     with graph.as_default():
--> 420       self._fetch_mapper = _FetchMapper.for_fetch(fetches)
    421     self._fetches = []
    422     self._targets = []

c:\users\kasper\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\client\session.py in for_fetch(fetch)
    235     if fetch is None:
    236       raise TypeError('Fetch argument %r has invalid type %r' %
--> 237                       (fetch, type(fetch)))
    238     elif isinstance(fetch, (list, tuple)):
    239       # NOTE(touts): This is also the code path for namedtuples.

TypeError: Fetch argument None has invalid type <class 'NoneType'>

I saw that similar questions have been asked on this site before, but those answers don't seem to solve my problem.

Any help would be appreciated, thanks!

Upvotes: 1

Views: 3867

Answers (2)

Bedir Yilmaz

Reputation: 4083

I believe that mrry is right.

If you take a second look at the notebook Assignment 2 - Tensorflow.ipynb, you will notice the following description cell:

Training a specific model

In this section, we're going to specify a model for you to construct. The goal here isn't to get good performance (that'll be next), but instead to get comfortable with understanding the TensorFlow documentation and configuring your own model.

Using the code provided above as guidance, and using the following TensorFlow documentation, specify a model with the following architecture:

7x7 Convolutional Layer with 32 filters and stride of 1
ReLU Activation Layer
Spatial Batch Normalization Layer (trainable parameters, with scale and centering)
2x2 Max Pooling layer with a stride of 2
Affine layer with 1024 output units
ReLU Activation Layer
Affine layer from 1024 input units to 10 outputs

This is asking you to define a model inside the function

# define model
def complex_model(X,y,is_training):
    pass

Just like they did in

def simple_model(X,y):
    # define our weights (e.g. init_two_layer_convnet)

    # setup variables
    Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 3, 32])
    bconv1 = tf.get_variable("bconv1", shape=[32])
    W1 = tf.get_variable("W1", shape=[5408, 10])
    b1 = tf.get_variable("b1", shape=[10])

    # define our graph (e.g. two_layer_convnet)
    a1 = tf.nn.conv2d(X, Wconv1, strides=[1,2,2,1], padding='VALID') + bconv1
    h1 = tf.nn.relu(a1)
    h1_flat = tf.reshape(h1,[-1,5408])
    y_out = tf.matmul(h1_flat,W1) + b1
    return y_out
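
Following that recipe, a minimal sketch of complex_model could look like the code below. This is for illustration only (it uses the higher-level tf.layers API rather than the explicit tf.get_variable style above, and it is not the official solution):

def complex_model(X,y,is_training):
    # 7x7 convolutional layer with 32 filters and stride of 1
    a1 = tf.layers.conv2d(X, filters=32, kernel_size=7, strides=1, padding='valid')
    # ReLU activation layer
    h1 = tf.nn.relu(a1)
    # spatial batch normalization (scale and centering are enabled by default)
    bn1 = tf.layers.batch_normalization(h1, training=is_training)
    # 2x2 max pooling layer with a stride of 2
    p1 = tf.layers.max_pooling2d(bn1, pool_size=2, strides=2)
    # affine layer with 1024 output units, followed by ReLU
    flat = tf.layers.flatten(p1)
    h2 = tf.nn.relu(tf.layers.dense(flat, 1024))
    # affine layer from 1024 units to 10 class scores
    y_out = tf.layers.dense(h2, 10)
    return y_out   # returning the output tensor is what avoids the None fetch error

Note that tf.layers.batch_normalization also creates moving-average update ops that have to be run during training (via tf.GraphKeys.UPDATE_OPS); that detail is omitted from this sketch.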

Hope this helps!

Upvotes: 1

mrry

Reputation: 126154

The problem is that the y_out argument to sess.run() is None, whereas it must be a tf.Tensor (or tensor-like object, such as a tf.Variable) or a tf.Operation.

In your example, y_out is defined by the following code:

# define model
def complex_model(X,y,is_training):
    pass

y_out = complex_model(X,y,is_training)

complex_model() doesn't return a value, so y_out = complex_model(...) will set y_out to None. I'm not sure if this function is representative of your real code, but it's possible that your real complex_model() function is also missing a return statement.
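
As a minimal, self-contained illustration of the difference (broken_model and fixed_model are hypothetical names, not part of the assignment):

import tensorflow as tf

X = tf.placeholder(tf.float32, [None, 4])

def broken_model(X):
    tf.layers.dense(X, 10)    # builds ops in the graph, but the function returns nothing

def fixed_model(X):
    y_out = tf.layers.dense(X, 10)
    return y_out              # hands the output tensor back to the caller

print(broken_model(X))        # None -> passing this to sess.run() raises the TypeError
print(fixed_model(X))         # a tf.Tensor with shape (?, 10) -> safe to fetch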

Upvotes: 2
