omarojo

Reputation: 1257

How to use RawDataInput in GPUImage2

I'm using a media capture library called NextLevel which emits a CMSampleBuffer on each frame. I want to take this buffer, feed it to GPUImage2 through a RawDataInput, pass it through some filters, and read it back from a RawDataOutput at the end of the chain:

CMSampleBuffer bytes -> rawDataInput -> someFilter -> someotherFilter -> rawDataOutput -> make a CVPixelBuffer for other stuff.

The problem is how to convert the CMSampleBuffer into an array of UInt8 so that RawDataInput can accept it.

I have the following code, but it's insanely slow: the frame makes it all the way through the chain to the rawDataOutput.dataAvailableCallback, but at roughly 1 frame per second. I found this code online; I have no idea what it is doing mathematically, but I suspect it is inefficient.

let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))

// Bi-planar YCbCr (NV12) buffer: plane 0 is luma (Y), plane 1 is interleaved chroma (Cb/Cr)
let lumaBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)
let chromaBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1)

let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)

let lumaBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
let chromaBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1)
let lumaBuffer = lumaBaseAddress?.assumingMemoryBound(to: UInt8.self)
let chromaBuffer = chromaBaseAddress?.assumingMemoryBound(to: UInt8.self)

// Per-pixel BT.601 YCbCr -> RGB conversion, done on the CPU in Swift
var rgbaImage = [UInt8](repeating: 0, count: 4*width*height)
for x in 0 ..< width {
    for y in 0 ..< height {
        let lumaIndex = x + y*lumaBytesPerRow
        let chromaIndex = (y/2)*chromaBytesPerRow + (x/2)*2   // chroma is subsampled 2x2, two bytes (Cb, Cr) per pair
        let yp = lumaBuffer?[lumaIndex]
        let cb = chromaBuffer?[chromaIndex]
        let cr = chromaBuffer?[chromaIndex+1]

        let ri = Double(yp!)                                 + 1.402   * (Double(cr!) - 128)
        let gi = Double(yp!) - 0.34414 * (Double(cb!) - 128) - 0.71414 * (Double(cr!) - 128)
        let bi = Double(yp!) + 1.772   * (Double(cb!) - 128)

        let r = UInt8(min(max(ri, 0), 255))
        let g = UInt8(min(max(gi, 0), 255))
        let b = UInt8(min(max(bi, 0), 255))

        // Note: channels are written in BGRA order even though the upload below declares .rgba
        rgbaImage[(x + y * width) * 4] = b
        rgbaImage[(x + y * width) * 4 + 1] = g
        rgbaImage[(x + y * width) * 4 + 2] = r
        rgbaImage[(x + y * width) * 4 + 3] = 255
    }
}

self.rawInput.uploadBytes(rgbaImage, size: Size(width: Float(width), height: Float(height)), pixelFormat: PixelFormat.rgba)
CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
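One way to avoid the per-pixel Swift loop entirely is to let Accelerate's vImage convert whole planes at once. The following is only a sketch I have not profiled (it assumes an NV12 / kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange buffer; full-range buffers would need different pixel-range values), and the result is in BGRA byte order, so it would be uploaded with pixelFormat: .bgra:

import Accelerate

// Sketch only: convert an NV12 (video-range) CVPixelBuffer to BGRA with vImage,
// replacing the per-pixel Swift loop above.
func convertToBGRA(_ pixelBuffer: CVPixelBuffer) -> [UInt8] {
    // Describe the YCbCr -> RGB conversion (BT.601, video range).
    // In practice, generate this once and reuse it for every frame.
    var conversionInfo = vImage_YpCbCrToARGB()
    var pixelRange = vImage_YpCbCrPixelRange(Yp_bias: 16, CbCr_bias: 128,
                                             YpRangeMax: 235, CbCrRangeMax: 240,
                                             YpMax: 235, YpMin: 16,
                                             CbCrMax: 240, CbCrMin: 16)
    vImageConvert_YpCbCrToARGB_GenerateConversion(kvImage_YpCbCrToARGBMatrix_ITU_R_601_4,
                                                  &pixelRange, &conversionInfo,
                                                  kvImage420Yp8_CbCr8, kvImageARGB8888,
                                                  vImage_Flags(kvImageNoFlags))

    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)

    // Wrap the luma and chroma planes without copying them
    var lumaVImage = vImage_Buffer(data: CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0),
                                   height: vImagePixelCount(height),
                                   width: vImagePixelCount(width),
                                   rowBytes: CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0))
    var chromaVImage = vImage_Buffer(data: CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1),
                                     height: vImagePixelCount(height / 2),
                                     width: vImagePixelCount(width / 2),
                                     rowBytes: CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1))

    var bgraImage = [UInt8](repeating: 0, count: 4 * width * height)
    bgraImage.withUnsafeMutableBytes { destBytes in
        var destVImage = vImage_Buffer(data: destBytes.baseAddress,
                                       height: vImagePixelCount(height),
                                       width: vImagePixelCount(width),
                                       rowBytes: 4 * width)
        // permuteMap reorders the ARGB output: [3, 2, 1, 0] -> BGRA
        let permuteMap: [UInt8] = [3, 2, 1, 0]
        vImageConvert_420Yp8_CbCr8ToARGB8888(&lumaVImage, &chromaVImage, &destVImage,
                                             &conversionInfo, permuteMap, 255,
                                             vImage_Flags(kvImageNoFlags))
    }
    return bgraImage
}

// Usage: rawInput.uploadBytes(convertToBGRA(pixelBuffer), size: ..., pixelFormat: .bgra)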

Update 1

I'm using a camera library called NextLevel to retrieve the camera frames (CMSampleBuffer) and feed them to the filter chain, in this case to RawDataInput via an array of UInt8 bytes. Because NextLevel uses luma/chroma when possible, I commented out the 5 lines at https://github.com/NextLevel/NextLevel/blob/master/Sources/NextLevel.swift#L1106 as @rythmic fishman suggested. But the code above would then break, so I replaced it with the following:

let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)
let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer)

// Copy the (assumed BGRA, tightly packed) buffer into a Swift array, one byte at a time
let int8Buffer = baseAddress?.assumingMemoryBound(to: UInt8.self)
var rgbaImage = [UInt8](repeating: 0, count: 4*width*height)
for i in 0 ..< (width*height*4) {
    rgbaImage[i] = int8Buffer![i]
}

self.rawInput.uploadBytes(rgbaImage, size: Size(width: Float(width), height: Float(height)), pixelFormat: PixelFormat.rgba)

CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))

This code works when NextLevel is not using luma/chroma, but the frames are still very, very slow when displayed at the end of the filter chain using a GPUImage2 RenderView.
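The per-byte copy loop above could also be replaced by row-wise memcpy, which is much cheaper. A sketch under the assumption that NextLevel is delivering plain BGRA frames (note the bytes-per-row padding check, and that the byte order is really BGRA, so the upload should use PixelFormat.bgra):

let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)

let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)
let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer)!

var bgraImage = [UInt8](repeating: 0, count: 4 * width * height)
bgraImage.withUnsafeMutableBytes { dest in
    let destBase = dest.baseAddress!
    if bytesPerRow == 4 * width {
        // Tightly packed buffer: one bulk copy
        memcpy(destBase, baseAddress, 4 * width * height)
    } else {
        // Rows are padded: copy only the visible pixels of each row
        for row in 0 ..< height {
            memcpy(destBase + row * 4 * width, baseAddress + row * bytesPerRow, 4 * width)
        }
    }
}

self.rawInput.uploadBytes(bgraImage, size: Size(width: Float(width), height: Float(height)), pixelFormat: .bgra)
CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)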

Update 2

So I decided to make a custom RawDataInput.swift based on the Camera.swift from GPUImage2. Because the Camera class takes frames from the native camera in CMSampleBuffer format, I thought: NextLevel is handing over exactly the same kind of buffers, so I can copy the implementation of the GPUImage2 Camera class, remove everything I don't need, and leave just one method that receives a CMSampleBuffer and processes it. It turns out this works perfectly, EXCEPT that there is a lag (no dropped frames, just lag). I don't know where the bottleneck is; I read that processing or modifying the CMSampleBuffers coming out of the native camera and then displaying them can cause delays, as mentioned in this question: How to keep low latency during the preview of video coming from AVFoundation?

I made a video of the lag that I'm experiencing: https://www.youtube.com/watch?v=5DQRnOTi4wk

The top-corner preview comes from NextLevel's previewLayer (an AVCaptureVideoPreviewLayer) and the filtered preview is a GPUImage2 RenderView at the end of the chain, running on an iPhone 6 at 1920px resolution with 7 filters. This lag doesn't happen with the GPUImage2 Camera class.

Here is the custom RawDataInput I put together.

#if os(Linux)
#if GLES
    import COpenGLES.gles2
    #else
    import COpenGL
#endif
#else
#if GLES
    import OpenGLES
    #else
    import OpenGL.GL3
#endif
#endif

import AVFoundation

public enum PixelFormat {
    case bgra
    case rgba
    case rgb
    case luminance

    func toGL() -> Int32 {
        switch self {
            case .bgra: return GL_BGRA
            case .rgba: return GL_RGBA
            case .rgb: return GL_RGB
            case .luminance: return GL_LUMINANCE
        }
    }
}

// TODO: Replace with texture caches where appropriate
public class RawDataInput: ImageSource {
    public let targets = TargetContainer()

    let frameRenderingSemaphore = DispatchSemaphore(value:1)
    let cameraProcessingQueue = DispatchQueue.global(priority:DispatchQueue.GlobalQueuePriority.default)
    let captureAsYUV:Bool = true
    let yuvConversionShader:ShaderProgram?
    var supportsFullYUVRange:Bool = false

    public init() {
        if captureAsYUV {
            supportsFullYUVRange = false
            let videoOutput = AVCaptureVideoDataOutput()
            let supportedPixelFormats = videoOutput.availableVideoCVPixelFormatTypes
            for currentPixelFormat in supportedPixelFormats! {
                if ((currentPixelFormat as! NSNumber).int32Value == Int32(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)) {
                    supportsFullYUVRange = true
                }
            }

            if (supportsFullYUVRange) {
                yuvConversionShader = crashOnShaderCompileFailure("Camera"){try sharedImageProcessingContext.programForVertexShader(defaultVertexShaderForInputs(2), fragmentShader:YUVConversionFullRangeFragmentShader)}
            } else {
                yuvConversionShader = crashOnShaderCompileFailure("Camera"){try sharedImageProcessingContext.programForVertexShader(defaultVertexShaderForInputs(2), fragmentShader:YUVConversionVideoRangeFragmentShader)}
            }
        } else {
            yuvConversionShader = nil
        }

    }

    public func uploadPixelBuffer(_ cameraFrame: CVPixelBuffer ) {
        guard (frameRenderingSemaphore.wait(timeout:DispatchTime.now()) == DispatchTimeoutResult.success) else { return }

        let bufferWidth = CVPixelBufferGetWidth(cameraFrame)
        let bufferHeight = CVPixelBufferGetHeight(cameraFrame)

        CVPixelBufferLockBaseAddress(cameraFrame, CVPixelBufferLockFlags(rawValue:CVOptionFlags(0)))

        sharedImageProcessingContext.runOperationAsynchronously{
            let cameraFramebuffer:Framebuffer
            let luminanceFramebuffer:Framebuffer
            let chrominanceFramebuffer:Framebuffer
            if sharedImageProcessingContext.supportsTextureCaches() {
                var luminanceTextureRef:CVOpenGLESTexture? = nil
                let _ = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, sharedImageProcessingContext.coreVideoTextureCache, cameraFrame, nil, GLenum(GL_TEXTURE_2D), GL_LUMINANCE, GLsizei(bufferWidth), GLsizei(bufferHeight), GLenum(GL_LUMINANCE), GLenum(GL_UNSIGNED_BYTE), 0, &luminanceTextureRef)
                let luminanceTexture = CVOpenGLESTextureGetName(luminanceTextureRef!)
                glActiveTexture(GLenum(GL_TEXTURE4))
                glBindTexture(GLenum(GL_TEXTURE_2D), luminanceTexture)
                glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE)
                glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE)
                luminanceFramebuffer = try! Framebuffer(context:sharedImageProcessingContext, orientation:.portrait, size:GLSize(width:GLint(bufferWidth), height:GLint(bufferHeight)), textureOnly:true, overriddenTexture:luminanceTexture)

                var chrominanceTextureRef:CVOpenGLESTexture? = nil
                let _ = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, sharedImageProcessingContext.coreVideoTextureCache, cameraFrame, nil, GLenum(GL_TEXTURE_2D), GL_LUMINANCE_ALPHA, GLsizei(bufferWidth / 2), GLsizei(bufferHeight / 2), GLenum(GL_LUMINANCE_ALPHA), GLenum(GL_UNSIGNED_BYTE), 1, &chrominanceTextureRef)
                let chrominanceTexture = CVOpenGLESTextureGetName(chrominanceTextureRef!)
                glActiveTexture(GLenum(GL_TEXTURE5))
                glBindTexture(GLenum(GL_TEXTURE_2D), chrominanceTexture)
                glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE)
                glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE)
                chrominanceFramebuffer = try! Framebuffer(context:sharedImageProcessingContext, orientation:.portrait, size:GLSize(width:GLint(bufferWidth / 2), height:GLint(bufferHeight / 2)), textureOnly:true, overriddenTexture:chrominanceTexture)
            } else {
                glActiveTexture(GLenum(GL_TEXTURE4))
                luminanceFramebuffer = sharedImageProcessingContext.framebufferCache.requestFramebufferWithProperties(orientation:.portrait, size:GLSize(width:GLint(bufferWidth), height:GLint(bufferHeight)), textureOnly:true)
                luminanceFramebuffer.lock()

                glBindTexture(GLenum(GL_TEXTURE_2D), luminanceFramebuffer.texture)
                glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_LUMINANCE, GLsizei(bufferWidth), GLsizei(bufferHeight), 0, GLenum(GL_LUMINANCE), GLenum(GL_UNSIGNED_BYTE), CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0))

                glActiveTexture(GLenum(GL_TEXTURE5))
                chrominanceFramebuffer = sharedImageProcessingContext.framebufferCache.requestFramebufferWithProperties(orientation:.portrait, size:GLSize(width:GLint(bufferWidth / 2), height:GLint(bufferHeight / 2)), textureOnly:true)
                chrominanceFramebuffer.lock()
                glBindTexture(GLenum(GL_TEXTURE_2D), chrominanceFramebuffer.texture)
                glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_LUMINANCE_ALPHA, GLsizei(bufferWidth / 2), GLsizei(bufferHeight / 2), 0, GLenum(GL_LUMINANCE_ALPHA), GLenum(GL_UNSIGNED_BYTE), CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1))
            }

            cameraFramebuffer = sharedImageProcessingContext.framebufferCache.requestFramebufferWithProperties(orientation:.portrait, size:luminanceFramebuffer.sizeForTargetOrientation(.portrait), textureOnly:false)

            let conversionMatrix:Matrix3x3
            if (self.supportsFullYUVRange) {
                conversionMatrix = colorConversionMatrix601FullRangeDefault
            } else {
                conversionMatrix = colorConversionMatrix601Default
            }
            convertYUVToRGB(shader:self.yuvConversionShader!, luminanceFramebuffer:luminanceFramebuffer, chrominanceFramebuffer:chrominanceFramebuffer, resultFramebuffer:cameraFramebuffer, colorConversionMatrix:conversionMatrix)


            //ONLY RGBA
            //let cameraFramebuffer:Framebuffer = sharedImageProcessingContext.framebufferCache.requestFramebufferWithProperties(orientation:.portrait, size:GLSize(width:GLint(bufferWidth), height:GLint(bufferHeight)), textureOnly:true)
            //glBindTexture(GLenum(GL_TEXTURE_2D), cameraFramebuffer.texture)
            //glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(bufferWidth), GLsizei(bufferHeight), 0, GLenum(GL_BGRA), GLenum(GL_UNSIGNED_BYTE), CVPixelBufferGetBaseAddress(cameraFrame))

            CVPixelBufferUnlockBaseAddress(cameraFrame, CVPixelBufferLockFlags(rawValue:CVOptionFlags(0)))


            self.updateTargetsWithFramebuffer(cameraFramebuffer)
            self.frameRenderingSemaphore.signal()

        }
    }

    public func uploadBytes(_ bytes:[UInt8], size:Size, pixelFormat:PixelFormat, orientation:ImageOrientation = .portrait) {
        let dataFramebuffer = sharedImageProcessingContext.framebufferCache.requestFramebufferWithProperties(orientation:orientation, size:GLSize(size), textureOnly:true, internalFormat:pixelFormat.toGL(), format:pixelFormat.toGL())

        glActiveTexture(GLenum(GL_TEXTURE1))
        glBindTexture(GLenum(GL_TEXTURE_2D), dataFramebuffer.texture)
        glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, size.glWidth(), size.glHeight(), 0, GLenum(pixelFormat.toGL()), GLenum(GL_UNSIGNED_BYTE), bytes)

        updateTargetsWithFramebuffer(dataFramebuffer)
    }

    public func transmitPreviousImage(to target:ImageConsumer, atIndex:UInt) {
        // TODO: Determine if this is necessary for the raw data uploads
//        if let buff = self.dataFramebuffer {
//            buff.lock()
//            target.newFramebufferAvailable(buff, fromSourceIndex:atIndex)
//        }
    }
}
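For completeness, the class above is driven from NextLevel's video delegate callback roughly like this. The exact NextLevelVideoDelegate method name and signature differ between NextLevel versions, so the wrapper below is just a placeholder, not the library's actual API:

// Placeholder wrapper: call this from whichever NextLevelVideoDelegate method
// hands you the raw CMSampleBuffer for each frame.
func process(_ sampleBuffer: CMSampleBuffer) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    rawInput.uploadPixelBuffer(pixelBuffer)   // rawInput is the custom RawDataInput above
}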

I just don't understand why there is that lag, given that this is no different from the GPUImage2 Camera class. NextLevel is not doing any other processing on those frames, it is just passing them along, so why the delay?

Upvotes: 5

Views: 820

Answers (1)

Hashim Khan

Reputation: 440

I was facing the same issue and spent too much time trying to resolve it. I finally found the solution: the video frame lag is related to video stabilization. Just use this line:

NextLevel.shared.videoStabilizationMode = .off

Its default value is .auto, which is why the lag occurs.
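For example, assuming the mode is set on the shared instance before capture starts (NextLevel's start() throws, hence the do/catch):

// Turn off stabilization before starting the capture session
NextLevel.shared.videoStabilizationMode = .off
do {
    try NextLevel.shared.start()
} catch {
    print("NextLevel failed to start: \(error)")
}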

Upvotes: 1
