Reputation: 1711
I've been following the Caffe MNIST example and trying to deploy a test of the trained model in C++, using OpenCV to read in the images. In the example, the training and test images are
scaled so the incoming pixels lie in the range [0,1), using the factor 0.00390625, which is 1 divided by 256.
I've heard there's a DataTransformer class in Caffe you can use to scale your images, but if I multiplied each pixel in the OpenCV Mat object by 0.00390625, would that give the same result?
Upvotes: 1
Views: 298
Reputation: 50667
The idea is right, but remember to convert your OpenCV Mat to a float or double type before scaling; otherwise the result is rounded back to integers and the fractional values are lost.
Something like:
cv::Mat mat; // assume this is one of your images (grayscale)
/* convert it to float */
mat.convertTo(mat, CV_32FC1); // use CV_32FC3 for color images
/* scaling here */
mat = mat * 0.00390625;
Update #1: The conversion and scaling can also be done in a single call:
cv::Mat mat; // assume this is one of your images (grayscale)
/* convert and scale here */
mat.convertTo(mat, CV_32FC1, 0.00390625);
Upvotes: 3