Reputation: 5234
I am recording filtered video through an iPhone camera, and there is a huge increase in CPU usage when converting a CIImage to a UIImage in real time while recording. My buffer function that makes a CVPixelBuffer takes a UIImage, which so far requires me to make this conversion. I'd like to instead make a buffer function that takes a CIImage, if possible, so I can skip the conversion from CIImage to UIImage. I'm thinking this will give me a huge boost in performance when recording video, since there won't be any hand-off between the CPU and GPU.
This is what I have right now. Within my captureOutput function, I create a UIImage from the CIImage, which is the filtered image. I create a CVPixelBuffer from the buffer function using the UIImage, and append it to the assetWriter's pixelBufferInput:
let imageUI = UIImage(ciImage: ciImage)
let filteredBuffer: CVPixelBuffer? = buffer(from: imageUI)
let success = self.assetWriterPixelBufferInput?.append(filteredBuffer!, withPresentationTime: self.currentSampleTime!)
My buffer function that uses a UIImage:
func buffer(from image: UIImage) -> CVPixelBuffer? {
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     Int(image.size.width),
                                     Int(image.size.height),
                                     kCVPixelFormatType_32ARGB,
                                     attrs,
                                     &pixelBuffer)
    guard status == kCVReturnSuccess else {
        return nil
    }
    CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)
    let videoRecContext = CGContext(data: pixelData,
                                    width: Int(image.size.width),
                                    height: Int(image.size.height),
                                    bitsPerComponent: 8,
                                    bytesPerRow: videoRecBytesPerRow,
                                    space: (MTLCaptureView?.colorSpace)!, // the current color space from an MTKView
                                    bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    videoRecContext?.translateBy(x: 0, y: image.size.height)
    videoRecContext?.scaleBy(x: 1.0, y: -1.0)
    UIGraphicsPushContext(videoRecContext!)
    image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
    UIGraphicsPopContext()
    CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    return pixelBuffer
}
Upvotes: 8
Views: 13255
Reputation: 9075
rob mayoff's answer sums it up, but there's a VERY-VERY-VERY important thing to keep in mind: Core Image defers rendering until the client requests access to the frame buffer, i.e. via CVPixelBufferLockBaseAddress.
I learned this from speaking with an Apple technical support engineer and couldn't find it in any of the docs. I've only used this on macOS, but I imagine it's no different on iOS.
Keep in mind that if you lock the buffer before the render, it will still work, but it will run one frame behind and the first render will be empty.
Finally, it's mentioned more than once on SO and even in this thread: avoid creating a new CVPixelBuffer for each render, because each buffer takes up a ton of system resources. This is why we have CVPixelBufferPool – Apple uses it in their frameworks, and so can you, for even better performance! ✌️
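As a sketch of that suggestion: an AVAssetWriterInputPixelBufferAdaptor already exposes a `pixelBufferPool` once the writer session has started, so buffers can be drawn from the pool instead of calling CVPixelBufferCreate per frame. The function name and parameters below are illustrative, not from the original code:

```swift
import AVFoundation
import CoreImage

/// Renders a filtered CIImage into a pooled pixel buffer and appends it.
/// `adaptor` is assumed to be an AVAssetWriterInputPixelBufferAdaptor whose
/// session has started (its pixelBufferPool is nil before that).
func appendFrame(_ ciImage: CIImage,
                 context: CIContext,
                 adaptor: AVAssetWriterInputPixelBufferAdaptor,
                 at time: CMTime) -> Bool {
    guard let pool = adaptor.pixelBufferPool else { return false }

    // Recycle a buffer from the pool instead of allocating a fresh one.
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pixelBuffer)
    guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return false }

    // Render before any client code locks the base address, so the deferred
    // render is forced now rather than showing up one frame late.
    context.render(ciImage, to: buffer)
    return adaptor.append(buffer, withPresentationTime: time)
}
```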
Upvotes: 6
Reputation: 5234
To extend the answer I got from rob mayoff, I'll show what I changed below:
Within the captureOutput function, I changed my code to:
let filteredBuffer: CVPixelBuffer? = buffer(from: ciImage)
filterContext?.render(ciImage, to: filteredBuffer!)
let success = self.assetWriterPixelBufferInput?.append(filteredBuffer!, withPresentationTime: self.currentSampleTime!)
Notice the buffer function now takes the CIImage. I changed my buffer function to accept a CIImage, and was able to get rid of a lot of what was inside:
func buffer(from image: CIImage) -> CVPixelBuffer? {
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     Int(image.extent.width),
                                     Int(image.extent.height),
                                     kCVPixelFormatType_32ARGB,
                                     attrs,
                                     &pixelBuffer)
    guard status == kCVReturnSuccess else {
        return nil
    }
    return pixelBuffer
}
Upvotes: 1
Reputation: 385500
Create a CIContext
and use it to render the CIImage
directly to your CVPixelBuffer
using CIContext.render(_: CIImage, to buffer: CVPixelBuffer)
.
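A minimal sketch of this, assuming `ciImage` is the filtered image and `pixelBuffer` is a CVPixelBuffer whose dimensions match `ciImage.extent` (both names stand in for the variables in the question):

```swift
import CoreImage
import CoreVideo

// Create the CIContext once and reuse it for every frame; building a
// context is expensive, and it caches state between renders.
let ciContext = CIContext()  // or CIContext(mtlDevice:) to share a Metal device

/// Per frame: render the filtered image straight into the pixel buffer,
/// with no UIImage or CGContext step in between.
func write(_ ciImage: CIImage, into pixelBuffer: CVPixelBuffer) {
    ciContext.render(ciImage, to: pixelBuffer)
}
```

Keeping the context outside the capture callback is the design point here: the render call itself is cheap once the context exists.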
Upvotes: 7