Reputation: 145
My question is about the caffe test result: my Python script does not give the same result as caffe test. I used AlexNet and my test accuracy is 0.9033.
Caffe test accuracy: 0.9033
Python accuracy: 0.8785
I used 40000 images for testing. The number of misclassified images should be 3868, but the number of misclassified images in my Python result is 4859. What is the problem?
Thank you.
Here is my caffe test command:
…/build/tools/caffe test --model …/my_deploy.prototxt --weights …/alex_24_11__iter_200000.caffemodel -gpu 0 -iterations 800
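As I understand it, caffe test runs iterations × the TEST-phase batch_size forward passes, so 800 iterations covers the whole 40000-image test set only if the TEST batch_size is 50 (an assumption, since my train_val.prototxt is not shown here):

# Assumed TEST-phase batch_size (not shown above): 50
iterations = 800
batch_size = 50
print(iterations * batch_size)  # 40000 -> the test set is evaluated exactly once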
After that, I found and tried a Python script on my test data, but I don't get the same result. I used this script on another dataset before and got the same accuracy as my caffe test, but back then I did not use a mean file during either training or testing. Now I used a mean file for both training and testing. Maybe there is a problem with the mean file, but I followed everything I found in the tutorials:
- I created the LMDB.
- I used compute_image_mean to create the mean file from the LMDB; the images in the LMDB are 256x256 (see the sanity-check sketch after this list).
- I used 227x227 images in AlexNet.
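As a quick sanity check, the mean file can be loaded and inspected to confirm it really is a 3x256x256 array on the 0-255 scale (a sketch; the paths are placeholders and the expected shape is an assumption based on the steps above):

import numpy as np
import caffe

# Load the binaryproto mean file produced by compute_image_mean.
blob = caffe.proto.caffe_pb2.BlobProto()
with open('.../image_mean.binaryproto', 'rb') as f:
    blob.ParseFromString(f.read())
mean_image = caffe.io.blobproto_to_array(blob)[0]

print(mean_image.shape)              # expected: (3, 256, 256), BGR channel order
print(mean_image.mean(axis=(1, 2)))  # per-channel mean pixel values, roughly in [0, 255]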
Python script:
import numpy as np
import caffe

caffe.set_mode_gpu()

model_def = '.../my_deploy.prototxt'
model_weights = '.../alex_24_11__iter_200000.caffemodel'
net = caffe.Net(model_def, model_weights, caffe.TEST)

# Convert the binaryproto mean file to a numpy array and save it.
blob = caffe.proto.caffe_pb2.BlobProto()
data = open('.../image_mean.binaryproto', 'rb').read()
blob.ParseFromString(data)
arr = np.array(caffe.io.blobproto_to_array(blob))
out = arr[0]
np.save('.../imageMean.npy', out)

# Reduce the mean image to a per-channel mean pixel (BGR).
mu = np.load('.../imageMean.npy')
mu = mu.mean(1).mean(1)

# Configure preprocessing: HxWxC -> CxHxW, RGB -> BGR,
# rescale [0, 1] -> [0, 255], subtract the per-channel mean pixel.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))
transformer.set_mean('data', mu)
transformer.set_raw_scale('data', 255)
transformer.set_channel_swap('data', (2, 1, 0))

net.blobs['data'].reshape(1, 3, 227, 227)

f = open('.../val.txt', 'r')
f2 = open('.../result.txt', 'a')
for x in range(0, 40000):
    a = f.readline()
    a = a.split(' ')
    image = caffe.io.load_image('.../' + a[0])
    transformed_image = transformer.preprocess('data', image)
    net.blobs['data'].data[...] = transformed_image
    output = net.forward()
    output_prob = output['prob'][0]
    # Write "<image_path> <predicted_class>" to the result file.
    f2.write(str(a[0]))
    f2.write(' ')
    f2.write(str(output_prob.argmax()))
    f2.write('\n')
f.close()
f2.close()
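For reference, a minimal variant of the loop that counts correct predictions directly instead of post-processing result.txt (a sketch; it assumes each line of val.txt is "<image_path> <integer_label>" and reuses net and transformer from above):

correct = 0
total = 0
with open('.../val.txt', 'r') as f:
    for line in f:
        path, label = line.split()
        image = caffe.io.load_image('.../' + path)
        net.blobs['data'].data[...] = transformer.preprocess('data', image)
        prob = net.forward()['prob'][0]
        correct += int(prob.argmax() == int(label))
        total += 1
print('accuracy: %.4f (%d / %d correct)' % (float(correct) / total, correct, total))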
First layer of my deploy.prototxt
layer {
name: "input"
type: "Input"
top: "data"
input_param { shape: { dim: 1 dim: 3 dim: 227 dim: 227 } }
}
Last layer of my deploy.prototxt
layer {
name: "prob"
type: "Softmax"
bottom: "fc8-16"
top: "prob"
}
The other layers are the same as in train_val.prototxt.
Upvotes: 1
Views: 690
Reputation: 66
Check that your preprocessing is the same when creating the LMDB and processing the test data.
For example, if you use:
transformer.set_channel_swap('data', (2,1,0))
you should ensure that your LMDB also swapped these channels (I assume this is an RGB to BGR conversion).
In particular, you say that you used the mean image during training. However, in your Transformer you are computing and subtracting the mean pixel. This could explain the small difference between your two accuracies.
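If the training Data layer really used the full mean image (a mean_file on 256x256 images with crop_size: 227), a closer match to caffe test would be to subtract the full 256x256 mean and then take the 227x227 center crop, instead of using the Transformer with a mean pixel. A rough sketch, reusing net from your script; the paths, the 256/227 sizes and the helper name preprocess_like_training are assumptions based on the question:

import numpy as np
import caffe

mean_image = np.load('.../imageMean.npy')          # assumed shape (3, 256, 256), BGR, 0-255

def preprocess_like_training(path):
    img = caffe.io.load_image(path)                # HxWx3, RGB, float in [0, 1]
    img = caffe.io.resize_image(img, (256, 256))   # match the LMDB image size
    img = img.transpose(2, 0, 1)[::-1] * 255.0     # CxHxW, RGB -> BGR, scale to [0, 255]
    img -= mean_image                               # subtract the FULL mean image
    off = (256 - 227) // 2                          # center crop, as the TEST-phase crop does
    return img[:, off:off + 227, off:off + 227]

net.blobs['data'].data[...] = preprocess_like_training('.../some_image.jpg')
prob = net.forward()['prob'][0]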
Upvotes: 0