Robert

Reputation: 38213

How do I convert from a CVPixelBufferRef to an OpenCV cv::Mat?

I would like to perform a few operations on a CVPixelBufferRef and come out with a cv::Mat.

I am not sure of the most efficient order in which to do this; however, I do know that all of the operations are available on an OpenCV matrix, so I would like to know how to convert it.

- (void) captureOutput:(AVCaptureOutput *)captureOutput 
         didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
         fromConnection:(AVCaptureConnection *)connection
{
     CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

     cv::Mat frame = f(pixelBuffer); // how do I implement f()?
}

Upvotes: 14

Views: 10268

Answers (3)

Mohamed Salah

Reputation: 1205

For those looking for Swift code for the new OpenCV 4.4 Swift wrapper:

/// Converts a CVPixelBuffer to a Mat object
/// - Parameter pixelBuffer: BGRA pixel data
/// - Returns: BGRA image data as a Mat object
func cvPixelBufferToMat(pixelBuffer: CVPixelBuffer) -> Mat? {
    
    CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    
    let base = CVPixelBufferGetBaseAddress(pixelBuffer)
    let width = Int32(CVPixelBufferGetWidth(pixelBuffer))
    let height = Int32(CVPixelBufferGetHeight(pixelBuffer))
    
    // Note: rows correspond to the height and cols to the width.
    let matrix = Mat(rows: height, cols: width, type: CvType.CV_8UC4)
    // Assumes the buffer has no row padding (bytesPerRow == width * 4); cap the copy
    // so it can never overrun the Mat's own allocation.
    let byteCount = min(CVPixelBufferGetDataSize(pixelBuffer), matrix.total() * matrix.elemSize())
    memcpy(matrix.dataPointer(), base, byteCount)
    
    CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    
    return matrix
}



/// Converts a Mat object to a CVPixelBuffer
/// - Parameter mat: BGRA image data
/// - Returns: CVPixelBuffer (BGRA)
func matToCVPixelBuffer(mat: Mat) -> CVPixelBuffer? {
    let matrix = mat
    
    let attributes = [
        kCVPixelBufferMetalCompatibilityKey: kCFBooleanTrue!,
        kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue!,
        kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue!,
        kCVPixelBufferWidthKey: matrix.cols(),
        kCVPixelBufferHeightKey: matrix.rows(),
        kCVPixelBufferBytesPerRowAlignmentKey: matrix.step1(0)
    ] as CFDictionary
    
    var pixelBuffer: CVPixelBuffer?
    
    let status = CVPixelBufferCreate(
        kCFAllocatorDefault, Int(matrix.cols()),
        Int(matrix.rows()),
        kCVPixelFormatType_32BGRA,
        attributes,
        &pixelBuffer)
    
    guard let pixelBuffer = pixelBuffer, (status == kCVReturnSuccess) else {
        return nil
    }
    
    CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    
    let base = CVPixelBufferGetBaseAddress(pixelBuffer)
    // Assumes the created buffer has no row padding (bytesPerRow == cols * 4);
    // otherwise the rows will be misaligned and a per-row copy is needed.
    memcpy(base, matrix.dataPointer(), matrix.total()*matrix.elemSize())
    
    CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    
    return pixelBuffer
}

Upvotes: 0

newinH

Reputation: 101

I'm using this. My cv::Mat uses the BGR (8UC3) color format.

CVImageBufferRef -> cv::Mat

- (cv::Mat) matFromImageBuffer: (CVImageBufferRef) buffer {

    CVPixelBufferLockBaseAddress(buffer, 0);

    void *address = CVPixelBufferGetBaseAddress(buffer);
    int width = (int) CVPixelBufferGetWidth(buffer);
    int height = (int) CVPixelBufferGetHeight(buffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(buffer);

    // Wrap the BGRA pixel data without copying, then convert into a BGR (8UC3)
    // Mat that owns its memory, so it stays valid after the buffer is unlocked.
    cv::Mat bgra(height, width, CV_8UC4, address, bytesPerRow);
    cv::Mat mat;
    cv::cvtColor(bgra, mat, cv::COLOR_BGRA2BGR);

    CVPixelBufferUnlockBaseAddress(buffer, 0);

    return mat;
}

cv::Mat -> CVImageBufferRef (CVPixelBufferRef)

- (CVImageBufferRef) getImageBufferFromMat: (cv::Mat) mat {

    // Expects a BGR (8UC3) Mat; expand it to BGRA for the pixel buffer.
    cv::cvtColor(mat, mat, cv::COLOR_BGR2BGRA);

    int width = mat.cols;
    int height = mat.rows;

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             // [NSNumber numberWithBool:YES], (__bridge id)kCVPixelBufferCGImageCompatibilityKey,
                             // [NSNumber numberWithBool:YES], (__bridge id)kCVPixelBufferCGBitmapContextCompatibilityKey,
                             [NSNumber numberWithInt:width], (__bridge id)kCVPixelBufferWidthKey,
                             [NSNumber numberWithInt:height], (__bridge id)kCVPixelBufferHeightKey,
                             nil];

    CVPixelBufferRef imageBuffer;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                          kCVPixelFormatType_32BGRA,
                                          (__bridge CFDictionaryRef) options,
                                          &imageBuffer);

    NSParameterAssert(status == kCVReturnSuccess && imageBuffer != NULL);

    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *base = (uint8_t *) CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Copy row by row in case Core Video pads each row beyond width * 4 bytes.
    for (int row = 0; row < height; row++) {
        memcpy(base + row * bytesPerRow, mat.ptr(row), mat.cols * mat.elemSize());
    }

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // The buffer comes back retained (+1); the caller must CVPixelBufferRelease it.
    return imageBuffer;
}
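
A possible way to wire these into the delegate from the question. This is only a sketch, assuming both methods live on the class implementing the AVCaptureVideoDataOutput delegate, in an Objective-C++ (.mm) file with the usual AVFoundation and OpenCV (imgproc) imports; the GaussianBlur call is just a stand-in for whatever processing you actually want.

- (void) captureOutput:(AVCaptureOutput *)captureOutput
         didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
         fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    cv::Mat frame = [self matFromImageBuffer:pixelBuffer];   // BGR copy of the frame
    cv::GaussianBlur(frame, frame, cv::Size(5, 5), 0);       // example processing step

    CVImageBufferRef processed = [self getImageBufferFromMat:frame];
    // ... hand `processed` to whatever consumes it, then release it:
    CVPixelBufferRelease(processed);
}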

Upvotes: 6

Robert

Reputation: 38213

I found the answer in some excellent GitHub source code. I adapted it here for simplicity. It also does the greyscale conversion for me.

CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);

// Set the following dict on AVCaptureVideoDataOutput's videoSettings to get YUV output
// @{ kCVPixelBufferPixelFormatTypeKey : kCVPixelFormatType_420YpCbCr8BiPlanarFullRange }

NSAssert(format == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, @"Only YUV is supported");

// The first plane / channel (at index 0) is the grayscale plane
// See more information about the YUV format here:
// http://en.wikipedia.org/wiki/YUV
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *baseaddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);

size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

cv::Mat mat((int)height, (int)width, CV_8UC1, baseaddress, bytesPerRow);

// Use the mat here. It wraps the plane's memory without copying, so clone() it
// if you need it after the unlock below.

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
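
The assertion above only passes if the data output is actually configured for bi-planar YUV. A minimal configuration sketch (the variable name is made up; the keys and constants are standard AVFoundation/Core Video):

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{
    (__bridge NSString *) kCVPixelBufferPixelFormatTypeKey :
        @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
};
// ... add videoOutput to the AVCaptureSession and set its sample buffer delegate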

I am thinking that the best order will be (a code sketch follows the list):

  1. Convert to grayscale (since it is done almost automatically)
  2. Crop (this should be a fast operation and will reduce the number of pixels to work with)
  3. Scale down
  4. Equalize the histogram
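
Putting that order into code, a minimal sketch. It assumes `mat` is the CV_8UC1 greyscale Mat from the snippet above and is used while the buffer is still locked; the crop rectangle is just an example.

// Step 1 (greyscale) is already done: `mat` wraps the Y plane.
cv::Rect cropRect(0, 0, mat.cols / 2, mat.rows / 2);               // 2. crop an example region
cv::Mat cropped = mat(cropRect);                                   //    (ROI, still no copy)

cv::Mat small;
cv::resize(cropped, small, cv::Size(), 0.5, 0.5, cv::INTER_AREA);  // 3. scale down by half

cv::Mat equalized;
cv::equalizeHist(small, equalized);                                // 4. equalize the histogram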

Upvotes: 16
