Why do DIGITS and OpenCV 3.1 give different classification results?

I use DIGITS to classify images (I tested GoogLeNet with Adaptive Gradient, Stochastic Gradient Descent, and Nesterov's Accelerated Gradient). The images are color, 256*256. After training, I use the "Test a single image" option to test one image. The result shows a perfect match and the image is classified correctly. Then I use the downloaded model in OpenCV 3.1 (Windows 64-bit, Visual Studio 2013, NVIDIA GPU), following "http://docs.opencv.org/trunk/d5/de7/tutorial_dnn_googlenet.html". However, I always get a different class and a wrong answer.
Edit:
I tried cvtColor(img, img, COLOR_BGR2RGB), but that did not solve the problem; I still get the wrong result. I also tried the different data transformations (none, image, and pixel) and different solver types.
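For reference, the pre-processing side of my code is essentially the sketch below (simplified; the file name is illustrative, and the model loading and forward pass follow the linked tutorial):

    #include <opencv2/opencv.hpp>

    int main()
    {
        // OpenCV loads images in BGR channel order.
        cv::Mat img = cv::imread("test.png");

        // The fix I tried: reorder the channels to RGB.
        cv::cvtColor(img, img, cv::COLOR_BGR2RGB);

        // My training images are 256*256; adjust if the deploy
        // prototxt expects a different input size (e.g. 224*224).
        cv::resize(img, img, cv::Size(256, 256));

        // ... the image is then wrapped into the network's input blob
        // and forwarded, as in the OpenCV dnn GoogLeNet tutorial.
        return 0;
    }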

Upvotes: 2

Views: 414

Answers (2)

Luke Yeager

Reputation: 1430

I would be surprised if OpenCV 3 vs 2 is causing this issue. Instead, I expect that the discrepancy is due to a difference in data pre-processing.

Here's an example of how to do data pre-processing for a Caffe model that was trained in DIGITS: https://github.com/NVIDIA/DIGITS/blob/v4.0.0/examples/classification/example.py#L40-L85

Also make sure you read these "gotchas": https://github.com/NVIDIA/DIGITS/blob/v4.0.0/examples/classification/README.md#limitations
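To illustrate, here is a minimal C++ sketch of the usual pre-processing steps for a DIGITS-trained Caffe model (resize, float conversion, mean subtraction). The input size and the mean values below are placeholders, i.e. assumptions to be replaced with the ones from your own DIGITS job (deploy.prototxt and the dataset's mean file), and it assumes per-channel mean-pixel subtraction rather than a full mean image:

    #include <opencv2/opencv.hpp>

    // Sketch of DIGITS-style pre-processing; values are placeholders.
    cv::Mat preprocess(const cv::Mat& bgrImage)
    {
        cv::Mat img;

        // Match the input size of your deploy.prototxt (GoogLeNet
        // deploy files often expect 224x224 even when the dataset
        // was built at 256x256).
        cv::resize(bgrImage, img, cv::Size(224, 224));

        // Convert to float so mean subtraction does not saturate.
        img.convertTo(img, CV_32FC3);

        // Placeholder per-channel mean (B, G, R); read the real
        // values from your job's mean.binaryproto.
        cv::subtract(img, cv::Scalar(104.0, 117.0, 123.0), img);

        // Caffe models conventionally take BGR input, which matches
        // what cv::imread returns, so no channel swap is done here;
        // verify this against your training setup.
        return img;
    }

The key point is that every step must match what DIGITS did at training time; a mismatch in size, mean, scale, or channel order can flip the predicted class.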

Upvotes: 2

Framester

Reputation: 35521

By default, OpenCV uses the now uncommon BGR (blue, green, red) ordering of the color channels; most other libraries expect RGB.

Why OpenCV Using BGR Colour Space Instead of RGB

This could explain the model's poor performance.
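If the channel order is indeed the culprit, a single conversion before any other pre-processing is enough. A sketch (whether you need BGR-to-RGB or the reverse depends on what the model was trained on):

    #include <opencv2/opencv.hpp>

    int main()
    {
        // cv::imread returns the image with BGR channel order.
        cv::Mat bgr = cv::imread("test.png");

        // Reorder the channels to RGB before feeding the network.
        cv::Mat rgb;
        cv::cvtColor(bgr, rgb, cv::COLOR_BGR2RGB);
        return 0;
    }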

Upvotes: 1
