Pratyush

Reputation: 480

TensorFlow: Cannot interpret feed_dict key as Tensor

I am trying to build a neural network model with one hidden layer (1024 nodes). The hidden layer is a ReLU unit. I am also processing the input data in batches of 128.

The inputs are images of size 28 * 28. In the following code, I get the error on the line

_, c = sess.run([optimizer, loss], feed_dict={x: batch_x, y: batch_y})
Error: TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder_64:0", shape=(128, 784), dtype=float32) is not an element of this graph.

Here is the code I have written:

#Initialize

batch_size = 128

layer1_input = 28 * 28
hidden_layer1 = 1024
num_labels = 10
num_steps = 3001

#Create neural network model
def create_model(inp, w, b):
    layer1 = tf.add(tf.matmul(inp, w['w1']), b['b1'])
    layer1 = tf.nn.relu(layer1)
    layer2 = tf.matmul(layer1, w['w2']) + b['b2']
    return layer2

#Initialize variables
x = tf.placeholder(tf.float32, shape=(batch_size, layer1_input))
y = tf.placeholder(tf.float32, shape=(batch_size, num_labels))

w = {
'w1': tf.Variable(tf.random_normal([layer1_input, hidden_layer1])),
'w2': tf.Variable(tf.random_normal([hidden_layer1, num_labels]))
}
b = {
'b1': tf.Variable(tf.zeros([hidden_layer1])),
'b2': tf.Variable(tf.zeros([num_labels]))
}

init = tf.initialize_all_variables()
train_prediction = tf.nn.softmax(model)

tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)

model = create_model(x, w, b)

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(model, y))    
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

#Process
with tf.Session(graph=graph1) as sess:
    tf.initialize_all_variables().run()
    total_batch = int(train_dataset.shape[0] / batch_size)

    for epoch in range(num_steps):    
        loss = 0
        for i in range(total_batch):
            batch_x, batch_y = train_dataset[epoch * batch_size:(epoch+1) * batch_size, :], train_labels[epoch * batch_size:(epoch+1) * batch_size,:]

            _, c = sess.run([optimizer, loss], feed_dict={x: batch_x, y: batch_y})
            loss = loss + c
        loss = loss / total_batch
        if epoch % 500 == 0:
            print ("Epoch :", epoch, ". cost = {:.9f}".format(avg_cost))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            valid_prediction = tf.run(tf_valid_dataset, {x: tf_valid_dataset})
            print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
    test_prediction = tf.run(tf_test_dataset,  {x: tf_test_dataset})
    print("TEST accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))

Upvotes: 31

Views: 51980

Answers (8)

Jed

Reputation: 2090

Similar to @javan-peymanfard and @hmadali-shafiee, I ran into this issue when loading the model in an API. I was using FastAPI with uvicorn. I fixed the issue by declaring the API endpoint functions as async, like this:

from fastapi import FastAPI

app = FastAPI()

@app.post('/endpoint_name')
async def endpoint_function():
    # Do stuff here, including possibly (re)loading the model
    ...

Upvotes: 0

Prajwol Lamichhane

Reputation: 31

You can also run into this while working in notebooks hosted on online learning platforms such as Coursera, so the following code can help you get past the issue.

Put this in the topmost cell of the notebook:

from keras import backend as K
K.clear_session()

Upvotes: 2

Ahmadali Shafiee

Reputation: 4657

I had the same issue with Flask. Adding the --without-threads flag to flask run, or passing threaded=False to app.run(), fixed it.
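
For example, a minimal sketch, assuming a Flask app object named app:

from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    # Run single-threaded so every request sees the graph that was
    # built when the process started
    app.run(threaded=False)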

Upvotes: 5

Ashar Siddiqui

Reputation: 91

In my case, I was calling the CNN multiple times inside a loop. I fixed my problem by doing the following:

# Declare this as global:
global graph
graph = tf.get_default_graph()

# Then, just before you call your model, use this:
with graph.as_default():
    # call your models here
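
For instance, a hedged sketch of the same pattern in a server that keeps a Keras model loaded (the file name model.h5 and the predict helper are assumptions for illustration):

import tensorflow as tf
from keras.models import load_model

model = load_model('model.h5')    # hypothetical model path
graph = tf.get_default_graph()    # capture the graph the model was built in

def predict(batch):
    # Re-enter the original graph on every call, even when the server
    # dispatches the call from a different thread
    with graph.as_default():
        return model.predict(batch)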

Note: In my case too, the app ran fine the first time and then gave the error above. Using the fix above solved the problem.

Hope that helps.

Upvotes: 3

yunus

Reputation: 2545

This worked for me. I faced this problem on a production server, though on my PC it ran fine. After predicting my data I inserted this piece of code, and then loaded the model again:

from keras import backend as K

# Before prediction
K.clear_session()

# After prediction
K.clear_session()
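
A minimal sketch of the whole pattern, assuming a hypothetical saved model model.h5 and a placeholder input array:

import numpy as np
from keras import backend as K
from keras.models import load_model

batch = np.zeros((1, 784))        # placeholder input; the shape is an assumption

K.clear_session()                 # before prediction: start from a fresh graph
model = load_model('model.h5')    # reload the model into the fresh graph
preds = model.predict(batch)
K.clear_session()                 # after prediction: drop the graph so the
                                  # next call reloads into a clean state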

Upvotes: 70

dopexxx

Reputation: 2636

The error message TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("...", dtype=dtype) is not an element of this graph can also arise if you run a session outside the scope of its with statement. Consider:

with tf.Session() as sess:
    sess.run(logits, feed_dict=feed_dict) 

sess.run(logits, feed_dict=feed_dict)

If logits and feed_dict are defined properly, the first sess.run command will execute normally, but the second will raise the mentioned error.
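
If you need to run the graph after the with block has exited, one option (a sketch, reusing the logits and feed_dict from above) is to manage the session lifetime explicitly:

sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(logits, feed_dict=feed_dict)   # works
sess.run(logits, feed_dict=feed_dict)   # still works: the session is open
sess.close()                            # close it explicitly when done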

Upvotes: 2

Javad Peymanfard

Reputation: 179

If you use the Django development server, just run runserver with --nothreading, for example:

python manage.py runserver --nothreading  

Upvotes: 15

xiaoming-qxm

Reputation: 1828

Variable x is not in the same graph as model; try to define all of these in the same graph scope. For example:

# define a graph
graph1 = tf.Graph()
with graph1.as_default():
    # placeholder
    x = tf.placeholder(...)
    y = tf.placeholder(...)
    # create model
    model = create(x, w, b)

with tf.Session(graph=graph1) as sess:
    # initialize all the variables
    sess.run(init)
    # then feed_dict
    # ......
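
Applied to the question's code, that means creating the placeholders, variables, model, loss, and optimizer all inside the same graph scope before opening the session. A condensed sketch (using the keyword-argument form of softmax_cross_entropy_with_logits, which later TF 1.x releases require):

import tensorflow as tf

batch_size, layer1_input, hidden_layer1, num_labels = 128, 784, 1024, 10

graph1 = tf.Graph()
with graph1.as_default():                 # every op below lands in graph1
    x = tf.placeholder(tf.float32, shape=(batch_size, layer1_input))
    y = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    w1 = tf.Variable(tf.random_normal([layer1_input, hidden_layer1]))
    b1 = tf.Variable(tf.zeros([hidden_layer1]))
    w2 = tf.Variable(tf.random_normal([hidden_layer1, num_labels]))
    b2 = tf.Variable(tf.zeros([num_labels]))
    model = tf.matmul(tf.nn.relu(tf.matmul(x, w1) + b1), w2) + b2
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=model, labels=y))
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
    init = tf.global_variables_initializer()

with tf.Session(graph=graph1) as sess:    # session bound to the same graph
    sess.run(init)
    # feed_dict keys x and y now belong to the session's graph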

Upvotes: 18
