Reputation: 820
I have trained a GoogLeNet in Caffe and now I want to run testing, so I load a deploy.prototxt together with the pretrained weights into a Net. But I get this error (interestingly, after a message saying the network was initialized):
I0927 17:51:41.171922 5336 net.cpp:255] Network initialization done.
I0927 17:51:41.195708 5336 net.cpp:744] Ignoring source layer label_imgdata_1_split
F0927 17:51:41.195746 5336 blob.cpp:496] Check failed: count_ == proto.data_size() (9408 vs. 0)
Apparently I can't copy-paste the whole prototxts because of the character limit here, so I am showing what they look like without the body, which is more or less the same in both (except for the phase: TRAIN and phase: TEST parts, of course). The body is identical to the example here: https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet
One note: I read in HDF5 data during training, and at test time I just use a Python script. There I perform the same preprocessing I did while creating the HDF5 data, so I don't use Caffe's io.transform, and I don't subtract the mean at all (it works better this way). That said, the error occurs during initialization, not while reading data.
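To illustrate, the per-image preprocessing is roughly the following minimal numpy sketch (the exact image size and channel handling come from my HDF5 creation script, so treat them as assumptions here):

```python
import numpy as np

def preprocess(img):
    """Turn one HWC uint8 image into the NCHW float32 blob Caffe expects.
    No mean subtraction, matching how the HDF5 training data was built."""
    blob = img.astype(np.float32)   # keep raw pixel values, no mean
    blob = blob.transpose(2, 0, 1)  # HWC -> CHW, as Caffe wants
    return blob[np.newaxis, ...]    # add batch axis -> NCHW

# example with a dummy 224x224 RGB image
img = np.zeros((224, 224, 3), dtype=np.uint8)
print(preprocess(img).shape)  # (1, 3, 224, 224)
```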
Here is what my deploy prototxt looks like:
name: "GoogleNet"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 10 dim: 3 dim: 224 dim: 224 } }
}
.....
layer {
  name: "loss3/classifier"
  type: "InnerProduct"
  bottom: "pool5/7x7_s1"
  top: "loss3/classifier"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 7
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "loss3/classifier"
  top: "prob"
}
And here is what my train prototxt looks like:
name: "GoogleNet"
layer {
  name: "imgdata"
  type: "HDF5Data"
  top: "imgdata"
  top: "label"
  hdf5_data_param {
    source: "/media/DATA/DetDataWOMeanSubt/train_h5_list.txt"
    batch_size: 64
    shuffle: true
  }
  include {
    phase: TRAIN
  }
}
layer {
  name: "imgdata"
  type: "HDF5Data"
  top: "imgdata"
  top: "label"
  hdf5_data_param {
    source: "/media/DATA/DetDataWOMeanSubt/eval_h5_list.txt"
    batch_size: 128
    shuffle: true
  }
  include {
    phase: TEST
  }
}
....
layer {
  name: "loss3/classifier"
  type: "InnerProduct"
  bottom: "pool5/7x7_s1"
  top: "loss3/classifier"
  inner_product_param {
    num_output: 7
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "loss3/classifier"
  bottom: "label"
  top: "loss"
  loss_weight: 1
}
layer {
  name: "accuracy/top-1"
  type: "Accuracy"
  include { phase: TEST }
  bottom: "loss3/classifier"
  bottom: "label"
  top: "accuracy/top-1"
  accuracy_param { top_k: 1 }
}
And here is how I initialize the network:
net = caffe.Net(model_def,      # defines the structure of the model
                model_weights,  # contains the trained weights
                caffe.TEST)     # use test mode (e.g., don't perform dropout)
And I do get this deprecation warning before the Net is initialized (it seems to continue initializing the network anyway):
DEPRECATION WARNING - deprecated use of Python interface
W0927 17:51:40.486548 5336 _caffe.cpp:140] Use this instead (with the named "weights" parameter):
W0927 17:51:40.486551 5336 _caffe.cpp:142] Net('/home/x/Desktop/caffe-caffe-0.16/models/bvlc_googlenet/deploy.prototxt', 1, weights='/home/x/Desktop/caffe-caffe-0.16/models/bvlc_googlenet/logs_iter_60000.caffemodel')
(But when I follow that suggestion it doesn't work either.)
I have done testing with Caffe many times before; I don't know why it is not working this time.
Upvotes: 0
Views: 918
Reputation: 820
In case anyone has been wondering: it turns out I had trained the model with one version of Caffe and was trying to test with another. I have two versions installed on my computer, and during testing my Python script was simply importing the older one, the build defined in LD_LIBRARY_PATH (for training I had directly referenced and used the caffe tools under build). The difference between the versions is not too dramatic, but it seems there was a mismatch while reading the prototxt.
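A quick way to see which install Python would actually pick up is to query the import machinery before importing anything (a generic sketch; with two Caffe builds installed you would query "caffe" instead of the stdlib module shown):

```python
import importlib.util
import os

def module_origin(name):
    """Path a module would be imported from, without importing it --
    handy when two builds of the same library are installed."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# e.g. module_origin("caffe") reveals which of the two builds wins
print(module_origin("os"))
print(os.environ.get("LD_LIBRARY_PATH", "(unset)"))
```

Comparing that path against the build directory used for training would have exposed the mismatch immediately.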
Upvotes: 0