Reputation: 449
This is really weird. I'm implementing this model:
Except that I read data from a text file using an ImageData layer with batch_size: 1. There are only two labels, and the text file is organized as usual:

/home/.../pathToFile 0
...
/home/.../pathToFile 1
Still, Caffe only ever trains and tests on label 0!
I run caffe using the regular tool.
./build/tools/caffe train --solver=solver.prototxt
When I open the net in pycaffe I get this message for the first time ever:
WARNING: Logging before InitGoogleLogging() is written to STDERR
and the size of net.blobs['label'].data is now 1, when it should be 2!
Not only that, but the label seems to be a float rather than an integer.
In: net.blobs['label'].data
Out: array([ 0.], dtype=float32)
I know that this has worked before; I just can't get my head around what I'm doing wrong or where to begin troubleshooting.
Upvotes: 2
Views: 471
Reputation: 114786
The output shape of your network depends on the input batch_size: if you define batch_size: 1, then your net processes a single example at a time, and thus it reads only a single label. If you change batch_size to 2, Caffe will read two samples per iteration and, consequently, the shape of label will become 2.
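For example, the ImageData layer in the question's train prototxt could be adjusted along these lines (a sketch; the layer name and source path are placeholders, not taken from the question):

```
layer {
  name: "data"
  type: "ImageData"
  top: "data"
  top: "label"
  image_data_param {
    source: "/path/to/list.txt"  # placeholder: the question's list file
    batch_size: 2                # read two samples per forward pass
  }
}
```

With batch_size: 2, net.blobs['label'].data will have 2 entries after a forward pass.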
One exception to this "shape rule" is the loss output: the loss defines a scalar function with respect to which gradients are computed. Thus, the loss output will always be a scalar regardless of the input shape.
Regarding the data type of label: Caffe stores all variables in "Blobs" of type float32, so integer class labels come back as floats.
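The effect of this float32 storage can be simulated with plain NumPy (this is not actual Caffe code, just an illustration of the dtype behavior): integer labels read from the list file come back as float32 values, so cast them back explicitly if you need integer class ids.

```python
import numpy as np

# Simulate how Caffe returns labels: integer class ids stored in a float32 blob.
labels_from_file = [0, 1]  # integer labels as written in the list file
label_blob = np.array(labels_from_file, dtype=np.float32)

print(label_blob)        # [0. 1.]
print(label_blob.dtype)  # float32

# Recover integer class ids with an explicit cast:
int_labels = label_blob.astype(np.int64)
print(int_labels.tolist())  # [0, 1]
```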
Upvotes: 2