mangate

Reputation: 658

Error when using Inception on TensorFlow (Same output for all pictures)

I'm trying to train a network on the CIFAR-10 dataset, but instead of using the pictures themselves I want to use the features from Inception's next-to-last layer.

So I wrote a little piece of code to pass all the pictures through Inception and get the features. Here it is:

import numpy as np
import tensorflow as tf

def run_inference_on_images(images):
    # Creates graph from saved GraphDef.
    create_graph()

    features_vec = np.ndarray(shape=(len(images), 2048), dtype=np.float32)

    with tf.Session() as sess:
        # Some useful tensors:
        # 'pool_3:0': the next-to-last layer, a 2048-float
        #   description of the image.
        # 'DecodeJpeg:0': the decoded image as a numpy array.
        length = len(images)
        for i in range(length):
            print('inferencing image number', i, 'out of', length)
            # Runs the pool_3 tensor by feeding the image data as input to the graph.
            features_tensor = sess.graph.get_tensor_by_name('pool_3:0')
            features = sess.run(features_tensor,
                                {'DecodeJpeg:0': images[i]})
            features_vec[i] = np.squeeze(features)
    return features_vec

"images" is the CIFAR-10 dataset. It's a numpy array with shape (50000,32,32,3)

The problem I'm facing is that the "features" output is always the same, even when I feed different pictures to sess.run. Am I missing something?

Upvotes: 2

Views: 365

Answers (2)

mangate

Reputation: 658

I was able to solve this issue. It seems that Inception doesn't accept NumPy arrays the way I thought, so I converted each array to a JPEG picture and only then fed it to the network.

Below is the code that works (the rest is the same):

from PIL import Image

def run_inference_on_images(images):
    # Creates graph from saved GraphDef.
    create_graph()

    features_vec = np.ndarray(shape=(len(images), 2048), dtype=np.float32)
    with tf.Session() as sess:
        # Look up the features tensor once, outside the loop.
        features_tensor = sess.graph.get_tensor_by_name('pool_3:0')
        length = len(images)
        for i in range(length):
            # Convert the numpy array to an actual JPEG file and read its raw bytes.
            im = Image.fromarray(images[i], 'RGB')
            im.save("tmp.jpeg")
            data = tf.gfile.FastGFile("tmp.jpeg", 'rb').read()
            print('inferencing image number', i, 'out of', length)
            # Feed the encoded JPEG bytes, not the decoded array.
            features = sess.run(features_tensor,
                                {'DecodeJpeg/contents:0': data})
            features_vec[i] = np.squeeze(features)
    return features_vec
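As a side note, writing tmp.jpeg to disk on every iteration works but is slow. The same JPEG bytes can be produced in memory; a minimal sketch, assuming PIL and the standard io module:

import io
from PIL import Image

def to_jpeg_bytes(image_array):
    # Encode an HxWx3 uint8 array as JPEG bytes without touching the disk.
    buf = io.BytesIO()
    Image.fromarray(image_array, 'RGB').save(buf, format='JPEG')
    return buf.getvalue()

# Then inside the loop:
#   features = sess.run(features_tensor,
#                       {'DecodeJpeg/contents:0': to_jpeg_bytes(images[i])})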

Upvotes: 2

Phillip Bock

Reputation: 1889

Not sure, but you might try moving the line

features_tensor = sess.graph.get_tensor_by_name('pool_3:0')

out of the inference loop and into the model-creation part, looking it up on the graph itself, e.g. as

features_tensor = tf.get_default_graph().get_tensor_by_name('pool_3:0')
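Hoisting the lookup means the graph is only searched once instead of once per image; a minimal sketch of what that could look like, assuming create_graph() builds into the default graph:

create_graph()
# Look up the tensor once, right after the graph is built...
features_tensor = tf.get_default_graph().get_tensor_by_name('pool_3:0')

with tf.Session() as sess:
    # ...and reuse it for every run call in the loop.
    for i in range(len(images)):
        features = sess.run(features_tensor,
                            {'DecodeJpeg:0': images[i]})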

Upvotes: 0
