Reputation: 10387
I have a local video that I want to pass as a texture to an OpenGL shader. I'm aware of a number of posts covering related topics, some being old or weird, and some I could not get to work.
It sounds like the way to go is:
CVPixelBuffer
CVOpenGLESTextureCacheCreateTextureFromImage
vs glTexImage2D
etc. If there's no specific reason to use YUV, I'd rather stick to RGB. My code is able to render UIImages,
but I could not adapt it to video.
It seems that CVOpenGLESTextureCacheCreateTextureFromImage
is now recommended over glTexImage2D
to pass the video frame to the OpenGL program. Some convert the video output buffer to an image and then pass it down the pipeline, but this sounds inefficient.
As a start, here is how I get the video pixel buffer that I pass to the view managing the GL program (you can probably skip this as I think it works ok):
import UIKit
import AVFoundation

class ViewController: UIViewController {

    // video things
    var videoOutput: AVPlayerItemVideoOutput!
    var player: AVPlayer!
    var playerItem: AVPlayerItem!
    var isVideoReady = false

    override func viewDidLoad() {
        super.viewDidLoad()
        self.setupVideo()
    }

    func setupVideo() -> Void {
        let url = Bundle.main.url(forResource: "myVideoName", withExtension: "mp4")!
        // the pixel format key has to be the CF constant bridged to String, not a literal string
        let outputSettings: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
        self.videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: outputSettings)
        self.player = AVPlayer()

        let asset = AVURLAsset(url: url)
        asset.loadValuesAsynchronously(forKeys: ["playable"]) {
            var error: NSError? = nil
            let status = asset.statusOfValue(forKey: "playable", error: &error)
            switch status {
            case .loaded:
                self.playerItem = AVPlayerItem(asset: asset)
                self.playerItem.add(self.videoOutput)
                self.player.replaceCurrentItem(with: self.playerItem)
                self.isVideoReady = true
            case .failed:
                print("failed")
            case .cancelled:
                print("cancelled")
            default:
                print("default")
            }
        }
    }

    // this function is called just before the openGL program renders
    // and can be used to update the texture (the GL program is already fully initialized at this point)
    func onGlRefresh(glView: OpenGLView) -> Void {
        if self.isVideoReady {
            let pixelBuffer = self.videoOutput.copyPixelBuffer(forItemTime: self.playerItem.currentTime(), itemTimeForDisplay: nil)
            glView.pixelBuffer = pixelBuffer
        }
    }
}
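As a side note, AVPlayerItemVideoOutput can also report whether a new frame is ready, so onGlRefresh could skip redundant copies. Here is a small sketch of that variant, using hasNewPixelBuffer(forItemTime:) and itemTime(forHostTime:), with everything else unchanged:
// Sketch: only copy a pixel buffer when the output actually has a new frame.
// CACurrentMediaTime() comes from QuartzCore (already available via UIKit).
func onGlRefresh(glView: OpenGLView) -> Void {
    guard self.isVideoReady else { return }
    let itemTime = self.videoOutput.itemTime(forHostTime: CACurrentMediaTime())
    if self.videoOutput.hasNewPixelBuffer(forItemTime: itemTime) {
        glView.pixelBuffer = self.videoOutput.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil)
    }
}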
The setup above seems to work fine, even though I'm not able to really test it :)
So now I have a CVPixelBuffer
available (as soon as the video is loaded).
How can I pass it to a GL program?
This code works for a CGImage? (an optional CGImage):
// textureSource is a CGImage?
guard let textureSource = textureSource else { return }

let width: Int = textureSource.width
let height: Int = textureSource.height

// draw the CGImage into a plain RGBA byte buffer
let spriteData = calloc(width * height * 4, MemoryLayout<GLubyte>.size)
let colorSpace = textureSource.colorSpace!
let spriteContext = CGContext(data: spriteData, width: width, height: height, bitsPerComponent: 8, bytesPerRow: width * 4, space: colorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
spriteContext.draw(textureSource, in: CGRect(x: 0, y: 0, width: CGFloat(width), height: CGFloat(height)))

// upload the bytes to the existing texture
glBindTexture(GLenum(GL_TEXTURE_2D), _textureId!)
glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(width), GLsizei(height), 0, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), spriteData)

free(spriteData)
but I could not get my head around how to adapt it efficiently to a CVPixelBuffer.
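For what it's worth, a naive (and probably inefficient) adaptation of the snippet above would lock the BGRA pixel buffer and hand its base address straight to glTexImage2D. This is only a sketch: the function name is made up, it assumes the output really vends kCVPixelFormatType_32BGRA, that _textureId is already set up, and that the buffer has no row padding:
func uploadTexture(from pixelBuffer: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    // assumes bytesPerRow == width * 4; real code should check CVPixelBufferGetBytesPerRow
    guard let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }

    glBindTexture(GLenum(GL_TEXTURE_2D), _textureId!)
    // GL_BGRA comes from Apple's BGRA8888 texture extension (GL_BGRA_EXT in some headers)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(width), GLsizei(height), 0, GLenum(GL_BGRA), GLenum(GL_UNSIGNED_BYTE), baseAddress)
}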
I'm happy to share more code if needed, but I thought this post was already long enough :)
========== EDIT ==========
I've looked at a bunch of repos (that all copy from Apple's CameraRipple and Ray Wenderlich's tutorial) and here is the github repo of what I have so far (I'll keep it alive to preserve the link). It's not ideal, but I don't want to paste too much code here. I've been able to get some video texturing to work, but:
The simulator issues look like they might be related to Xcode 8 being in beta, but I'm not sure about that...
Upvotes: 3
Views: 3882
Reputation: 11
About the color: you missed telling the sampler uniforms which texture units to use, by calling glUniform1i() at the end of each section in refreshTextures():
func refreshTextures() -> Void {
    guard let pixelBuffer = pixelBuffer else { return }
    let textureWidth: GLsizei = GLsizei(CVPixelBufferGetWidth(pixelBuffer))
    let textureHeight: GLsizei = GLsizei(CVPixelBufferGetHeight(pixelBuffer))

    guard let videoTextureCache = videoTextureCache else { return }
    self.cleanUpTextures()

    // Y plane
    glActiveTexture(GLenum(GL_TEXTURE0))
    var err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, videoTextureCache, pixelBuffer, nil, GLenum(GL_TEXTURE_2D), GL_RED_EXT, textureWidth, textureHeight, GLenum(GL_RED_EXT), GLenum(GL_UNSIGNED_BYTE), 0, &lumaTexture)
    if err != kCVReturnSuccess {
        print("Error at CVOpenGLESTextureCacheCreateTextureFromImage \(err)")
        return
    }
    guard let lumaTexture = lumaTexture else { return }

    glBindTexture(CVOpenGLESTextureGetTarget(lumaTexture), CVOpenGLESTextureGetName(lumaTexture))
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
    glTexParameterf(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GLfloat(GL_CLAMP_TO_EDGE))
    glTexParameterf(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GLfloat(GL_CLAMP_TO_EDGE))
    // bind the Y sampler uniform to texture unit 0
    glUniform1i(_locations.uniforms.textureSamplerY, 0)

    // UV plane
    glActiveTexture(GLenum(GL_TEXTURE1))
    err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, videoTextureCache, pixelBuffer, nil, GLenum(GL_TEXTURE_2D), GL_RG_EXT, textureWidth / 2, textureHeight / 2, GLenum(GL_RG_EXT), GLenum(GL_UNSIGNED_BYTE), 1, &chromaTexture)
    if err != kCVReturnSuccess {
        print("Error at CVOpenGLESTextureCacheCreateTextureFromImage \(err)")
        return
    }
    guard let chromaTexture = chromaTexture else { return }

    glBindTexture(CVOpenGLESTextureGetTarget(chromaTexture), CVOpenGLESTextureGetName(chromaTexture))
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
    glTexParameterf(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GLfloat(GL_CLAMP_TO_EDGE))
    glTexParameterf(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GLfloat(GL_CLAMP_TO_EDGE))
    // bind the UV sampler uniform to texture unit 1
    glUniform1i(_locations.uniforms.textureSamplerUV, 1)
}
Here, the types of the uniforms are also corrected:
private struct Uniforms {
    var textureSamplerY = GLint()
    var textureSamplerUV = GLint()
}
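For completeness, the two sampler locations used above would be looked up once after the program is linked, and glUniform1i() only affects the program currently in use, so glUseProgram() must already have been called when refreshTextures() runs. A sketch, where _program and the uniform names "SamplerY"/"SamplerUV" are assumptions about the rest of the code:
// fetch the sampler uniform locations once, right after glLinkProgram
glUseProgram(_program)
_locations.uniforms.textureSamplerY = glGetUniformLocation(_program, "SamplerY")
_locations.uniforms.textureSamplerUV = glGetUniformLocation(_program, "SamplerUV")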
It seems now we get the correct color.
Upvotes: 0
Reputation: 11184
Some time ago I faced the same problem, and a good point to start is the sample provided by Apple (CameraRipple).
What you actually need:
CVPixelBufferRef
(according to your post, already done). This should be received repeatedly so the OpenGL program can display real-time video. Fragment shader example:
varying lowp vec2 v_texCoord;
precision mediump float;

uniform sampler2D SamplerUV;
uniform sampler2D SamplerY;
uniform mat3 colorConversionMatrix;

void main()
{
    mediump vec3 yuv;
    lowp vec3 rgb;

    // Subtract constants to map the video range to start at 0
    yuv.x = (texture2D(SamplerY, v_texCoord).r - (16.0/255.0));
    yuv.yz = (texture2D(SamplerUV, v_texCoord).ra - vec2(0.5, 0.5));

    rgb = yuv * colorConversionMatrix;

    gl_FragColor = vec4(rgb, 1);
}
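Note that this two-plane (Y + UV) shader assumes the AVPlayerItemVideoOutput vends a biplanar YCbCr pixel buffer, not the 32BGRA requested in the question's output settings. A sketch of matching output settings, in Swift for consistency with the question's code:
// biplanar YCbCr output so the buffer has separate Y and CbCr planes (plane indices 0 and 1)
let outputSettings: [String: Any] = [
    kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange)
]
let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: outputSettings)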
For displaying video, Apple recommends using the following color conversion matrix (I also use it):
static const GLfloat kColorConversion709[] = {
    1.1643,  0.0000,  1.2802,
    1.1643, -0.2148, -0.3806,
    1.1643,  2.1280,  0.0000
};
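The same matrix can also be kept as a Swift array and uploaded with glUniformMatrix3fv; a sketch, where the uniform location property is an assumption:
let kColorConversion709: [GLfloat] = [
    1.1643,  0.0000,  1.2802,
    1.1643, -0.2148, -0.3806,
    1.1643,  2.1280,  0.0000
]
// _locations.uniforms.colorConversionMatrix is assumed to hold the GLint location of colorConversionMatrix
glUniformMatrix3fv(_locations.uniforms.colorConversionMatrix, 1, GLboolean(GL_FALSE), kColorConversion709)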
and of course, for how to display the buffer as an OpenGL texture, you can use something like:
-(void)displayPixelBuffer:(CVPixelBufferRef)pixelBuffer
{
    CVReturn err;
    if (pixelBuffer != NULL) {
        int frameWidth = (int)CVPixelBufferGetWidth(pixelBuffer);
        int frameHeight = (int)CVPixelBufferGetHeight(pixelBuffer);

        if (!_videoTextureCache) {
            NSLog(@"No video texture cache");
            return;
        }
        [self cleanUpTextures];

        // Create Y and UV textures from the pixel buffer. These textures will be drawn on the frame buffer.
        // Y-plane.
        glActiveTexture(GL_TEXTURE0);
        err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _videoTextureCache, pixelBuffer, NULL, GL_TEXTURE_2D, GL_LUMINANCE, frameWidth, frameHeight, GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &_lumaTexture);
        if (err) {
            NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
        }
        glBindTexture(CVOpenGLESTextureGetTarget(_lumaTexture), CVOpenGLESTextureGetName(_lumaTexture));
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        // UV-plane.
        glActiveTexture(GL_TEXTURE1);
        err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _videoTextureCache, pixelBuffer, NULL, GL_TEXTURE_2D, GL_LUMINANCE_ALPHA, frameWidth / 2, frameHeight / 2, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 1, &_chromaTexture);
        if (err) {
            NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
        }
        glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture), CVOpenGLESTextureGetName(_chromaTexture));
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        glEnableVertexAttribArray(_vertexBufferID);
        glBindFramebuffer(GL_FRAMEBUFFER, _vertexBufferID);
        CFRelease(pixelBuffer);

        glUniformMatrix3fv(uniforms[UNIFORM_COLOR_CONVERSION_MATRIX], 1, GL_FALSE, _preferredConversion);
    }
}
Do not forget to clean up the textures:
-(void)cleanUpTextures
{
    if (_lumaTexture) {
        CFRelease(_lumaTexture);
        _lumaTexture = NULL;
    }
    if (_chromaTexture) {
        CFRelease(_chromaTexture);
        _chromaTexture = NULL;
    }
    // Periodic texture cache flush every frame
    CVOpenGLESTextureCacheFlush(_videoTextureCache, 0);
}
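One prerequisite that is easy to miss: the texture cache itself (_videoTextureCache above, videoTextureCache in the Swift code) has to be created once up front with the EAGLContext the view renders with. A minimal sketch in Swift, where the context property name is an assumption:
var videoTextureCache: CVOpenGLESTextureCache?
// context is assumed to be the EAGLContext used by the GL view
let err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nil, context, nil, &videoTextureCache)
if err != kCVReturnSuccess {
    print("Error at CVOpenGLESTextureCacheCreate \(err)")
}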
PS. It's not in Swift, but it shouldn't be a problem to convert the Obj-C to Swift, I guess.
Upvotes: 2