Reputation: 181
I am trying to write code that applies SURF object detection, so I took one of the OpenCV samples (sample 3) and started updating the onCameraViewStarted()
and onCameraFrame()
methods, but I keep getting a runtime error when I try it on my Galaxy S3 phone, and I couldn't find anything here to help with my problem. Here is my code with my updates:
public class Sample3Native extends Activity implements CvCameraViewListener {
    private static final String TAG = "OCVSample::Activity";

    private Mat mRgba;
    private Mat mGrayMat;
    private CameraBridgeViewBase mOpenCvCameraView;

    Mat descriptors;
    List<Mat> descriptorsList;
    FeatureDetector featureDetector;
    MatOfKeyPoint keyPoints;
    DescriptorExtractor descriptorExtractor;
    DescriptorMatcher descriptorMatcher;
    private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
        @Override
        public void onManagerConnected(int status) {
            switch (status) {
                case LoaderCallbackInterface.SUCCESS:
                {
                    Log.i(TAG, "OpenCV loaded successfully");
                    // Load native library after(!) OpenCV initialization
                    System.loadLibrary("native_sample");
                    mOpenCvCameraView.enableView();
                } break;
                default:
                {
                    super.onManagerConnected(status);
                } break;
            }
        }
    };
    public void onCameraViewStarted(int width, int height) {
        mRgba = new Mat(height, width, CvType.CV_8UC4);
        mGrayMat = new Mat(height, width, CvType.CV_8UC1);
        featureDetector = FeatureDetector.create(4);         // SURF = 4
        descriptorExtractor = DescriptorExtractor.create(2); // SURF = 2
        descriptorMatcher = DescriptorMatcher.create(6);     // BRUTEFORCE_SL2 = 6
    }
    public Mat onCameraFrame(Mat inputFrame) {
        inputFrame.copyTo(mRgba);
        //detect_1(0, mRgba.getNativeObjAddr(), keyPoints.getNativeObjAddr());
        //Now mRgba contains the current frame (start manipulation part)
        //detecting keypoints
        featureDetector.detect(mRgba, keyPoints);
        //draw keypoints
        //Features2d.drawKeypoints(mRgba, keyPoints, mRgba);
        //finding descriptors
        descriptorExtractor.compute(mRgba, keyPoints, descriptors);
        //Matcher between 2 images or set of images
        //Note: training set and query set are handled here! (in matcher)
        //descriptorsList = descriptorMatcher.getTrainDescriptors();
        //descriptorsList.add(descriptors);
        //descriptorMatcher.add(descriptorsList);
        //Imgproc.cvtColor(mRgba, mGrayMat, Imgproc.COLOR_RGBA2GRAY);
        //FindFeatures(mGrayMat.getNativeObjAddr(), mRgba.getNativeObjAddr());
        return mRgba;
    }
}
Note: I have tried commenting out everything except the featureDetector.detect(mRgba, keyPoints)
call in the onCameraFrame()
method, and it still gives a runtime error on my phone.
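For reference, here is what I expect the commented-out matching step to do conceptually: BRUTEFORCE_SL2 compares every query descriptor against every training descriptor using squared L2 distance and keeps the nearest one. A minimal pure-Java sketch with toy float[] descriptors (not OpenCV's Mat-based API):

```java
import java.util.Arrays;

public class BruteForceSL2 {
    // Squared L2 distance between two descriptors (no sqrt, hence "SL2").
    static double sl2(float[] a, float[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return sum;
    }

    // For each query descriptor, return the index of the nearest training descriptor.
    static int[] match(float[][] query, float[][] train) {
        int[] matches = new int[query.length];
        for (int q = 0; q < query.length; q++) {
            int best = -1;
            double bestDist = Double.MAX_VALUE;
            for (int t = 0; t < train.length; t++) {
                double d = sl2(query[q], train[t]);
                if (d < bestDist) {
                    bestDist = d;
                    best = t;
                }
            }
            matches[q] = best;
        }
        return matches;
    }

    public static void main(String[] args) {
        float[][] train = { {0f, 0f}, {1f, 0f}, {0f, 1f} };
        float[][] query = { {0.9f, 0.1f}, {0.1f, 0.9f} };
        System.out.println(Arrays.toString(match(query, train))); // prints [1, 2]
    }
}
```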
Upvotes: 18
Views: 10803
Reputation: 1221
Are you sure you are using SURF correctly? As far as I know, SIFT and SURF are not included in the distribution package of OpenCV for Android. To use them, you need to compile the nonfree module and use it in your project. So what you need to do is create an NDK project and compile the nonfree module as a standalone library, then use that library to compile your program. After that you should be able to build your application. You can refer to this tutorial.
Once you have the JNI library, you can easily wrap it in a Java JNI interface. Then you should be able to use the Java interface in your Android application.
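The Java side of such a wrapper is just a class that loads the library and declares native methods. A minimal sketch; the library name nonfree_jni and the detect signature are illustrative assumptions, not the actual nonfree API:

```java
// Hypothetical Java wrapper around a native library built from OpenCV's nonfree module.
// The library name "nonfree_jni" and the method signature are assumptions.
public class NativeSurf {
    static {
        try {
            System.loadLibrary("nonfree_jni"); // loads libnonfree_jni.so
        } catch (UnsatisfiedLinkError e) {
            // On a device without the library this fails; native calls below then throw too.
            System.err.println("Could not load nonfree_jni: " + e.getMessage());
        }
    }

    // Implemented on the C++ side (JNIEXPORT ... Java_NativeSurf_detect) against SURF.
    // The long arguments are Mat addresses obtained via Mat.getNativeObjAddr().
    public static native void detect(long matAddr, long keyPointsAddr);
}
```

The corresponding C++ side would implement Java_NativeSurf_detect and run the nonfree SURF detector on the cv::Mat behind matAddr.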
Upvotes: 1
Reputation: 1221
To comment on cid's and HMK's answers (sorry, I don't have the 50 reputation needed to add a comment, so I have to create a new answer):
The OpenCV library can accept a color image as input. The following is my SIFT detection and descriptor extraction code, and it works pretty well. This means you don't need to convert the image to grayscale format yourself, even though the SIFT algorithm only works on grayscale images; I believe the OpenCV detector does some preprocessing. (Since the SURF and SIFT detectors work in a similar way, I assume SURF does not require grayscale input either.)
Mat image;
image = imread(argv[1], CV_LOAD_IMAGE_COLOR);
if (!image.data)
{
    cout << "Could not open or find the image" << std::endl;
    return -1;
}

vector<KeyPoint> keypoints;
Mat descriptors;

// Create a SIFT keypoint detector.
SiftFeatureDetector detector;
detector.detect(image, keypoints);
cout << "Detected " << (int) keypoints.size() << " keypoints" << endl;

// Compute feature description.
detector.compute(image, keypoints, descriptors);
cout << "Computed feature." << endl;
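To illustrate the kind of preprocessing I mean: converting a color image to grayscale is just a weighted sum per pixel, using the standard luminance weights 0.299/0.587/0.114 that cvtColor's RGB-to-gray conversion applies. A pure-Java sketch on packed RGB ints, not OpenCV code:

```java
public class Gray {
    // Convert packed 0xRRGGBB pixels to 8-bit gray using the standard luminance weights.
    static int[] toGray(int[] rgb) {
        int[] gray = new int[rgb.length];
        for (int i = 0; i < rgb.length; i++) {
            int r = (rgb[i] >> 16) & 0xFF;
            int g = (rgb[i] >> 8) & 0xFF;
            int b = rgb[i] & 0xFF;
            gray[i] = (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
        }
        return gray;
    }

    public static void main(String[] args) {
        int[] pixels = { 0xFFFFFF, 0x000000, 0xFF0000 }; // white, black, red
        int[] g = toGray(pixels);
        System.out.println(g[0] + " " + g[1] + " " + g[2]); // prints 255 0 76
    }
}
```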
Upvotes: 0
Reputation: 574
SURF and SIFT only support grayscale input, so you have to convert the image to grayscale first with the code below: cvtColor( mRgba, mRgba, CV_BGR2GRAY );
Upvotes: 0
Reputation: 716
If I'm not mistaken, the OpenCV SURF feature detector only works with grayscale images. So try adding this after your call to copyTo() in the onCameraFrame()
method:
cvtColor(mRgba, mGrayMat, COLOR_RGBA2GRAY);
Upvotes: 1