I played around with AVFoundation, trying to apply a filter to live video. I tried applying a filter to AVCaptureVideoDataOutput, but the output occupied only 1/4 of the view.
Here is some of my related code:
Capturing
let availableCameraDevices = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo)
for device in availableCameraDevices as! [AVCaptureDevice] {
    if device.position == .Back {
        backCameraDevice = device
    } else if device.position == .Front {
        frontCameraDevice = device
    }
}
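For completeness, the chosen device still has to be wrapped in an AVCaptureDeviceInput and attached to the session. A minimal sketch, assuming the session and backCameraDevice properties from above:
do {
    // Wrap the device in an input and attach it; error handling is abbreviated here.
    let input = try AVCaptureDeviceInput(device: backCameraDevice)
    if session.canAddInput(input) {
        session.addInput(input)
    }
} catch {
    print("Could not create device input: \(error)")
}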
Configure output
private func configureVideoOutput() {
    videoOutput = AVCaptureVideoDataOutput()
    videoOutput?.setSampleBufferDelegate(self, queue: dispatch_queue_create("sample buffer delegate", DISPATCH_QUEUE_SERIAL))
    if session.canAddOutput(videoOutput) {
        session.addOutput(videoOutput)
    }
}
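One assumption worth stating, since the question doesn't show it: Core Image consumes BGRA buffers directly, so it can help to request that pixel format on the output explicitly. A sketch:
// Ask for BGRA frames, which CIImage(CVPixelBuffer:) handles directly.
videoOutput?.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]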
Get image
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    // Grab the pixel buffer
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    // Create a CIImage from it, rotate it, and zero the origin
    var image = CIImage(CVPixelBuffer: pixelBuffer)
    image = image.imageByApplyingTransform(CGAffineTransformMakeRotation(CGFloat(-M_PI_2)))
    let origin = image.extent.origin
    image = image.imageByApplyingTransform(CGAffineTransformMakeTranslation(-origin.x, -origin.y))
    self.manualDelegate?.cameraController(self, didOutputImage: image)
}
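As an aside, the per-frame rotation could instead be handled once at the capture layer by setting the connection's orientation when the output is configured. A sketch; whether this fits your pipeline is a design choice:
// Rotate frames at the capture connection instead of per frame in Core Image.
if let connection = videoOutput?.connectionWithMediaType(AVMediaTypeVideo)
    where connection.supportsVideoOrientation {
    connection.videoOrientation = .Portrait
}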
Render
func cameraController(cameraController: CameraController, didOutputImage image: CIImage) {
    if glContext != EAGLContext.currentContext() {
        EAGLContext.setCurrentContext(glContext)
    }
    let filteredImage = image.imageByApplyingFilter("CIColorControls", withInputParameters: [kCIInputSaturationKey: 0.0])
    var rect = view.bounds
    glView.bindDrawable()
    ciContext.drawImage(filteredImage, inRect: rect, fromRect: image.extent)
    glView.display()
}
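The snippet above assumes glContext and ciContext already exist. A minimal sketch of how they might be created; the names come from the question, the setup itself is my assumption:
// One EAGL context shared by the GLKView and the CIContext that renders into it.
let glContext = EAGLContext(API: .OpenGLES2)
glView.context = glContext
let ciContext = CIContext(EAGLContext: glContext)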
I suspected the Retina display and its scale factor were causing this, but I'm not sure where I should deal with it. I already set the content scale factor on the GLKView, but no luck.
private var glView: GLKView {
    // Set in storyboard
    return view as! GLKView
}
glView.contentScaleFactor = glView.bounds.size.width / UIScreen.mainScreen().bounds.size.width * UIScreen.mainScreen().scale
Your problem is with the output rect used in the drawImage function:
ciContext.drawImage(filteredImage, inRect: rect, fromRect: image.extent)
The image's extent is in actual pixels, while the view's bounds are in points, which are not adjusted by the contentScaleFactor to get pixels. Your device undoubtedly has a contentScaleFactor of 2.0, so the drawn image is 1/2 the size in each dimension.
Instead, set the rect as:
var rect = CGRect(x: 0, y: 0, width: glView.drawableWidth, height: glView.drawableHeight)
drawableWidth and drawableHeight return the dimensions in pixels, accounting for the contentScaleFactor. See:
https://developer.apple.com/reference/glkit/glkview/1615591-drawablewidth
Also, there is no need to set the glView's contentScaleFactor.
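Putting it together, the render callback from the question would become (the same code, with only the rect changed):
func cameraController(cameraController: CameraController, didOutputImage image: CIImage) {
    if glContext != EAGLContext.currentContext() {
        EAGLContext.setCurrentContext(glContext)
    }
    let filteredImage = image.imageByApplyingFilter("CIColorControls", withInputParameters: [kCIInputSaturationKey: 0.0])
    // drawableWidth/drawableHeight are in pixels, matching the image's extent.
    let rect = CGRect(x: 0, y: 0, width: glView.drawableWidth, height: glView.drawableHeight)
    glView.bindDrawable()
    ciContext.drawImage(filteredImage, inRect: rect, fromRect: image.extent)
    glView.display()
}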
Which format are you setting before starting the capture? Are you sure that the video preview layer is filling the whole screen?
You have two ways to set the resolution during an AV capture (both are sketched below):
1. Set an AVCaptureDeviceFormat with the highest resolution, by looking through the available capture formats.
2. Set the sessionPreset property of your capture session. Doc here.
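A sketch of both approaches, reusing the session and backCameraDevice names from the question; the preset chosen here is just an example:
// 1. Pick the device format with the largest dimensions.
// lockForConfiguration() is required before changing activeFormat.
do {
    try backCameraDevice.lockForConfiguration()
    let formats = backCameraDevice.formats as! [AVCaptureDeviceFormat]
    if let best = formats.maxElement({
        let a = CMVideoFormatDescriptionGetDimensions($0.formatDescription)
        let b = CMVideoFormatDescriptionGetDimensions($1.formatDescription)
        return a.width * a.height < b.width * b.height
    }) {
        backCameraDevice.activeFormat = best
    }
    backCameraDevice.unlockForConfiguration()
} catch {
    print("Could not lock device for configuration: \(error)")
}

// 2. Or pick a session preset before calling startRunning().
if session.canSetSessionPreset(AVCaptureSessionPresetHigh) {
    session.sessionPreset = AVCaptureSessionPresetHigh
}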