iHorse

Reputation: 595

How to convert a CVImageBufferRef to UIImage

I am trying to capture video from a camera. I have gotten the captureOutput:didOutputSampleBuffer: callback to trigger, and it gives me a sample buffer that I then convert to a CVImageBufferRef. I then attempt to convert that image to a UIImage that I can view in my app.

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    /*Lock the image buffer*/
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    /*Get information about the image*/
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    /*Create a CGImageRef from the CVImageBufferRef*/
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    /*Unlock the image buffer only after we are done reading from its base address*/
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    /*Release the context and color space*/
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    /*We display the result on the custom layer*/
    /*self.customLayer.contents = (id) newImage;*/

    /*We display the result on the image view (we rotate the image so that the video is displayed correctly)*/
    UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
    self.capturedView.image = image;

    /*We release the CGImageRef*/
    CGImageRelease(newImage);
}

The code seems to work fine up until the call to CGBitmapContextCreate, which always returns a NULL pointer, so none of the rest of the function works. No matter what I pass it, the function returns NULL. I have no idea why.
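
For debugging, this minimal sketch (using the imageBuffer from the callback above) logs the buffer's four-character pixel format code, which shows what the camera is actually delivering:

OSType format = CVPixelBufferGetPixelFormatType(imageBuffer);
/*Prints e.g. "420v" for bi-planar YUV frames or "BGRA" for 32-bit BGRA frames*/
NSLog(@"Pixel format: %c%c%c%c",
      (char)((format >> 24) & 0xFF), (char)((format >> 16) & 0xFF),
      (char)((format >> 8) & 0xFF), (char)(format & 0xFF));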

Upvotes: 25

Views: 31601

Answers (4)

abhimuralidharan

Reputation: 5939

You can directly call:

self.yourImageView.image = [[UIImage alloc] initWithCIImage:[CIImage imageWithCVPixelBuffer:imageBuffer]];
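
For context, here is a minimal sketch of that line inside the capture callback; the image view name is a placeholder, and the UIKit update is dispatched to the main queue because the callback usually arrives on a background queue:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    UIImage *image = [[UIImage alloc] initWithCIImage:[CIImage imageWithCVPixelBuffer:imageBuffer]];
    dispatch_async(dispatch_get_main_queue(), ^{
        /*UIKit must only be touched on the main thread*/
        self.yourImageView.image = image;
    });
}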

Upvotes: 5

Olivia Stork

Reputation: 4809

If you need to convert a CVImageBufferRef to a UIImage, it is unfortunately much more difficult than it should be.

Essentially, you need to convert it first to a CIImage, then to a CGImage, and finally to a UIImage. I wish I could tell you why. :)

-(void) screenshotOfVideoStream:(CVImageBufferRef)imageBuffer
{
    /*Wrap the pixel buffer in a CIImage*/
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];

    /*Render the CIImage into a CGImage covering the full buffer*/
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage = [temporaryContext
                             createCGImage:ciImage
                             fromRect:CGRectMake(0, 0,
                                                 CVPixelBufferGetWidth(imageBuffer),
                                                 CVPixelBufferGetHeight(imageBuffer))];

    /*Wrap the CGImage in a UIImage, then release the CGImage we created*/
    UIImage *image = [[UIImage alloc] initWithCGImage:videoImage];
    [self doSomethingWithOurUIImage:image];
    CGImageRelease(videoImage);
}

This particular method worked for me when I was converting H.264 video, using the VTDecompressionSession callback to get the CVImageBufferRef (but it should work for any CVImageBufferRef). I was using iOS 8.1 and Xcode 6.2.
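
For reference, here is a minimal sketch of what such a decompression callback might look like; the MyDecoder class is a hypothetical wrapper that owns the method above, but the C function signature is VideoToolbox's VTDecompressionOutputCallback:

static void didDecompress(void *decompressionOutputRefCon,
                          void *sourceFrameRefCon,
                          OSStatus status,
                          VTDecodeInfoFlags infoFlags,
                          CVImageBufferRef imageBuffer,
                          CMTime presentationTimeStamp,
                          CMTime presentationDuration)
{
    if (status != noErr || imageBuffer == NULL) {
        return; /*No frame to render*/
    }
    /*MyDecoder is a hypothetical class that owns screenshotOfVideoStream:*/
    MyDecoder *decoder = (__bridge MyDecoder *)decompressionOutputRefCon;
    [decoder screenshotOfVideoStream:imageBuffer];
}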

Upvotes: 21

maaalex

Reputation: 346

Benjamin Loulier wrote a really good post on outputting a CVImageBufferRef, comparing several approaches with speed in mind.

You can also find a working example on github ;)

The original post has since gone offline, so how about going back in time? ;) Here you go: http://web.archive.org/web/20140426162537/http://www.benjaminloulier.com/posts/ios4-and-direct-access-to-the-camera

Upvotes: 2

Chris Zelenak

Reputation: 2178

The way you are passing the baseAddress presumes that the image data is in the form

ACCC

(where A is the alpha component and each C is a color component: R, G, or B).

If you've set up your AVCaptureSession to capture the video frames in their native format, more than likely you're getting the video data back in planar YUV420 format. In order to do what you're attempting here, the easiest fix is to specify that you want the video frames captured in kCVPixelFormatType_32BGRA. That is the non-planar format Apple recommends, the reasoning for which is not stated, but which I can reasonably assume is due to performance considerations; it also happens to match the kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst flags you are already passing to CGBitmapContextCreate.
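
A minimal configuration sketch, assuming an existing AVCaptureSession in a variable named session (the variable name and the delegate queue label are placeholders):

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
/*Ask for non-planar BGRA frames so the base address can be fed straight into CGBitmapContextCreate*/
videoOutput.videoSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};
[videoOutput setSampleBufferDelegate:self
                               queue:dispatch_queue_create("videoQueue", DISPATCH_QUEUE_SERIAL)];
if ([session canAddOutput:videoOutput]) {
    [session addOutput:videoOutput];
}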

Caveat: I've not done this myself, and am assuming that accessing the CVPixelBufferRef contents like this is a reasonable way to build the image. I can't vouch for it actually working, but I /can/ tell you that the way you are doing things right now reliably will not work, due to the pixel format in which you are (probably) capturing the video frames.

Upvotes: 20
