Reputation: 151
I'm trying to adapt the code from here:
https://github.com/foundry/OpenCVStitch
into my program. However, I've run up against a wall. That code stitches together images that already exist on disk, whereas the program I'm trying to make stitches together images the user has just taken. The error I'm getting is that when I pass the images to the stitch function, it reports that they are of invalid size (0 x 0).
Here is the stitching function:
- (IBAction)stitchImages:(UIButton *)sender {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSArray* imageArray = [NSArray arrayWithObjects:chosenImage, chosenImage2, nil];
        UIImage* stitchedImage = [CVWrapper processWithArray:imageArray]; // error occurring within processWithArray function
        dispatch_async(dispatch_get_main_queue(), ^{
            NSLog(@"stitchedImage %@", stitchedImage);
            UIImageView *imageView = [[UIImageView alloc] initWithImage:stitchedImage];
            self.imageView = imageView;
            [self.scrollView addSubview:imageView];
            self.scrollView.backgroundColor = [UIColor blackColor];
            self.scrollView.contentSize = self.imageView.bounds.size;
            self.scrollView.maximumZoomScale = 4.0;
            self.scrollView.minimumZoomScale = 0.5;
            self.scrollView.contentOffset = CGPointMake(-(self.scrollView.bounds.size.width - self.imageView.bounds.size.width) / 2,
                                                        -(self.scrollView.bounds.size.height - self.imageView.bounds.size.height) / 2);
            [self.spinner stopAnimating];
        });
    });
}
chosenImage and chosenImage2 are images the user has taken using these two functions:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    savedImage = info[UIImagePickerControllerOriginalImage];
    // display photo in the correct UIImageView
    switch (image_location) {
        case 1:
            chosenImage = info[UIImagePickerControllerOriginalImage];
            self.imageView2.image = chosenImage;
            image_location++;
            break;
        case 2:
            chosenImage2 = info[UIImagePickerControllerOriginalImage];
            self.imageView3.image = chosenImage2;
            image_location--;
            break;
    }
    // if user clicked "take photo", it should save photo
    // if user clicked "select photo", it should not save photo
    /*if (should_save){
        UIImageWriteToSavedPhotosAlbum(chosenImage, nil, nil, nil);
    }*/
    [picker dismissViewControllerAnimated:YES completion:NULL];
}
- (IBAction)takePhoto:(UIButton *)sender {
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.delegate = self;
    picker.allowsEditing = NO;
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    //last_pressed = 1;
    should_save = 1;
    [self presentViewController:picker animated:YES completion:NULL];
}
The stitchImages function passes an array of these two images to this function:
+ (UIImage*) processWithArray:(NSArray*)imageArray
{
    if ([imageArray count] == 0) {
        NSLog(@"imageArray is empty");
        return nil;
    }
    cv::vector<cv::Mat> matImages;
    for (id image in imageArray) {
        if ([image isKindOfClass:[UIImage class]]) {
            cv::Mat matImage = [image CVMat3];
            NSLog(@"matImage: %@", image);
            matImages.push_back(matImage);
        }
    }
    NSLog(@"stitching...");
    cv::Mat stitchedMat = stitch(matImages); // error occurring within stitch function
    UIImage* result = [UIImage imageWithCVMat:stitchedMat];
    return result;
}
This is where the program runs into a problem. When it is passed images that are stored locally in the application bundle, it works fine. However, when it is passed the images held in the chosenImage and chosenImage2 variables, it doesn't.
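One way to see what the wrapper actually receives (a sketch; imageArray here means the same array built in stitchImages above) would be to log each image's size and orientation before the call:

    // Sanity check: log the pixel size, orientation flag and scale of
    // each input image before handing the array to CVWrapper.
    for (UIImage *img in imageArray) {
        NSLog(@"size: %@ orientation: %ld scale: %.1f",
              NSStringFromCGSize(img.size),
              (long)img.imageOrientation,
              (double)img.scale);
    }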
Here is the stitch function, called from processWithArray, which is producing the error:
cv::Mat stitch (vector<Mat>& images)
{
    imgs = images;
    Mat pano;
    Stitcher stitcher = Stitcher::createDefault(try_use_gpu);
    Stitcher::Status status = stitcher.stitch(imgs, pano);
    if (status != Stitcher::OK)
    {
        cout << "Can't stitch images, error code = " << int(status) << endl;
        //return 0;
    }
    return pano;
}
The error is "Can't stitch images, error code = 1" (Stitcher::ERR_NEED_MORE_IMGS).
Upvotes: 2
Views: 1802
Reputation: 31745
You are hitting memory limits. The four demo images included with the project are 720 x 960 px, whereas you are using the full-resolution images from the device camera.
[Screenshot: an Allocations trace in Instruments leading up to the crash, stitching two images from the camera]
The point of this GitHub sample is to illustrate a few things:
(1) how to integrate openCV with iOS;
(2) how to separate Objective-C and C++ code using a wrapper;
(3) how to implement the most basic stitching function in openCV.
It is best regarded as a 'hello world' project for iOS+openCV, and was not designed to work robustly with camera images. If you want to use my code as-is, I would suggest first reducing your camera images to a manageable size (e.g. max 1000 px on the long side).
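For example, a resize step along these lines (the helper name and the 1000 px cap are illustrative, not part of the project) could be applied to each camera image before it goes into the array:

    // Illustrative helper: scale an image down so its longer side is at
    // most maxDimension. Drawing through UIKit also bakes the orientation
    // flag into the pixels as a side effect.
    UIImage *resizedForStitching(UIImage *image, CGFloat maxDimension) {
        CGFloat longSide = MAX(image.size.width, image.size.height);
        if (longSide <= maxDimension) return image;

        CGFloat scale = maxDimension / longSide;
        CGSize newSize = CGSizeMake(image.size.width * scale, image.size.height * scale);

        UIGraphicsBeginImageContextWithOptions(newSize, YES, 1.0);
        [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
        UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return result;
    }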
In any case, the openCV framework you are using is as old as the project. Thanks to your question, I have just updated it (it is now arm64-friendly), although the memory limitations still apply.
V2, OpenCVSwiftStitch, may be a more interesting starting point for your experiments: the interface is written in Swift, and it uses CocoaPods to keep up with openCV versions (albeit currently pinned to 2.4.9.1, as 2.4.10 breaks everything). So it still illustrates the three points, and also shows how to call C++ from Swift using an Objective-C wrapper as an intermediary.
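The wrapper pattern amounts to something like this sketch (names taken from the code in your question, not the exact openCVSwiftStitch source): a pure Objective-C header that Swift can import, with all the C++ confined to the .mm implementation file.

    // CVWrapper.h -- pure Objective-C, safe to expose to Swift via a
    // bridging header; no C++ types appear in the interface.
    #import <UIKit/UIKit.h>

    @interface CVWrapper : NSObject

    // Takes UIImages in, returns a stitched UIImage; the cv::Mat
    // conversions and the call into stitch() live in CVWrapper.mm.
    + (UIImage *)processWithArray:(NSArray *)imageArray;

    @end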
I may be able to improve the memory handling (by passing around pointers). If so, I will push an update to both v1 and v2. If you can make any improvements, please send a pull request.
Update: I've had another look, and I am fairly sure it won't be possible to improve the memory handling without getting deeper into the openCV stitching algorithms. The images are already allocated on the heap, so there are no improvements to be made there. I expect the best bet would be to tile and cache the intermediate images which openCV seems to create as part of the process. I will post an update if I get any further with this. Meanwhile, resizing the camera images is the way to go.
Update 2
Some while later, I found the underlying cause of the issue. When you use photos from the iOS camera as your inputs, any shot taken in portrait will have the wrong pixel dimensions (and orientation) for openCV. This is because all iOS camera photos are captured natively as 'landscape left': the pixel dimensions are landscape, with the home button on the right. To display portrait, the imageOrientation flag is set to UIImageOrientationRight, which merely tells the OS to rotate the image 90 degrees to the right for display.
The image data itself is stored unrotated, landscape left. The incorrect pixel orientation leads to higher memory requirements and unpredictable/broken results in openCV.
I have fixed this in the latest version of openCVSwiftStitch: when necessary, images are now rotated pixelwise before being added to the openCV pipeline.
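The fix boils down to redrawing any UIImage whose orientation flag is not 'up', so the rotation is baked into the pixel data before the cv::Mat conversion. A minimal sketch of the idea (not the exact openCVSwiftStitch code):

    // Sketch: redraw the image so its pixels match the display orientation.
    // UIImage.size already reports the oriented dimensions, and drawInRect:
    // applies the orientation flag, so the result is an 'up' image.
    UIImage *normalizedImage(UIImage *image) {
        if (image.imageOrientation == UIImageOrientationUp) return image;

        UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
        [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
        UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return normalized;
    }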
Upvotes: 6