Pavan Yeddanapudi

Reputation: 61

Same predictions for all inputs with a fine-tuned Inception v3 model

I am trying to fine-tune an Inception v3 model with 2 categories. These are the steps I followed:

1. Created sharded files from custom data using build_image_data.py, after changing the number of classes and examples in imagenet_data.py, and used a labelsfile.txt.
2. Changed the values accordingly in flowers_data.py and trained the model using flowers_train.py.
3. Froze the model and got a protobuf file.
4. My input node (x) expects a batch of size 32 with shape 299x299x3, so I hacked my way around this by duplicating my test image 32 times to create an input batch (a sketch of that helper follows this list).
5. Using the input and output nodes and the input batch, I can print the prediction scores with the script shown after the sketch.
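Roughly, the idea of create_test_batch is just to decode, resize, and tile the image. This is a simplified sketch; the exact decoding and any pixel scaling in my real helper may differ:

import cv2
import numpy as np

def create_test_batch(image_name, batch_size=32, size=299):
    # Decode and resize the image to the 299x299x3 shape the input node expects.
    img = cv2.imread(image_name)
    img = cv2.resize(img, (size, size)).astype(np.float32)
    # NOTE: any pixel scaling applied here has to mirror what the training pipeline did.
    # Duplicate the single image batch_size times to fill the fixed batch dimension.
    return np.tile(img[np.newaxis, ...], (batch_size, 1, 1, 1))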

import tensorflow as tf

image_data = create_test_batch(args.image_name)          # batch of 32 copies of the test image
graph = load_graph(args.frozen_model_filename)           # load the frozen protobuf into a graph
x = graph.get_tensor_by_name('prefix/batch_processing/Reshape:0')    # input node
y = graph.get_tensor_by_name('prefix/tower_0/logits/predictions:0')  # output node
with tf.Session(graph=graph) as sess:
    y_out = sess.run(y, feed_dict={x: image_data})
    print(y_out)

I got a result that looks like this:

[[ 0.02264258  0.16756369  0.80979371]
 [ 0.02351799  0.16782859  0.80865341]
 ...
 [ 0.02205461  0.1794569   0.7984885 ]
 [ 0.02153662  0.16436867  0.81409472]]   (32 rows in total, one per copy of the image)
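To turn these scores into a class label, taking the argmax per row is enough (illustrative snippet; since all 32 rows are copies of the same image, one row suffices):

import numpy as np

pred = np.argmax(y_out, axis=1)   # predicted class index for each of the 32 copies
print(pred[0])                    # all rows are the same image, so one value is enough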

For any input image, the maximum score is always in column 3, which means I get the same prediction for every input.

Is there a step I am missing in my process? Can anyone help me with this issue? I am using Python 2.7 on Ubuntu 16.04 in a cloud VM.

Upvotes: 1

Views: 421

Answers (2)

megh_sat

Reputation: 424

Hi, I was facing the same problem, and I found out that I hadn't preprocessed my test set the same way as my training set. I fixed the problem by applying the same preprocessing steps to both the training and test sets. This was the issue in my case.

Training:

import cv2
import numpy as np
from keras.preprocessing import image

train_image = []
for i in files:
    img = cv2.imread(i)                # read image from disk
    img = cv2.resize(img, (100, 100))  # resize to the network's input size
    img = image.img_to_array(img)
    img = img / 255                    # normalize pixels to [0, 1]
    train_image.append(img)
X = np.array(train_image)

But while preprocessing the test set I forgot to normalize "img" (i.e. doing img = img/255). After adding the img = img/255 step to my test set preprocessing, the problem was solved.

Test Set:

test_image = []
for i in test_files:                   # test_files: list of test image paths
    img = cv2.imread(i)
    img = cv2.resize(img, (100, 100))
    img = image.img_to_array(img)
    img = img / 255                    # the normalization step I had forgotten
    test_image.append(img)

Upvotes: 2

Mengshan

Reputation: 21

I had a similar issue recently, and what I found was that I had pre-processed my test data differently from the method used in the fine-tuning program. Basically, the data ranges were different: the training images had pixels ranging from 0 to 255, while the test images had pixels ranging from 0 to 1. That's why, when I fed my test data into the model, it output the same predictions every time: the test pixels were in such a small range that they made no difference to the model.
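A quick sanity check is to compare the pixel ranges of one batch from each pipeline before feeding them in (illustrative only; train_batch and test_batch are placeholders for a batch from the training and test pipelines):

import numpy as np

# train_batch / test_batch: placeholder arrays, one batch from each pipeline
print('train pixel range:', np.min(train_batch), np.max(train_batch))
print('test pixel range: ', np.min(test_batch), np.max(test_batch))
# If one range is roughly [0, 255] and the other [0, 1], the preprocessing is mismatched.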

Hope that helps even though it might not be your case.

Upvotes: 0
