Reputation: 830
Now I'm developing an image processing app targeted at iOS 4.1 or later.
I want to use CVOpenGLESTextureCache when the app is running on iOS 5.
I create the texture from the texture cache with the code below:
NSDictionary *empty = [NSDictionary dictionary];
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithBool:NO], kCVPixelBufferCGImageCompatibilityKey,
                         [NSNumber numberWithBool:NO], kCVPixelBufferCGBitmapContextCompatibilityKey,
                         empty, kCVPixelBufferIOSurfacePropertiesKey,
                         nil];
CVPixelBufferCreate(kCFAllocatorDefault, texSize_.width, texSize_.height,
                    kCVPixelFormatType_32BGRA, (CFDictionaryRef)options, &renderTarget);

CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                             filterTextureCache, renderTarget,
                                             NULL, // texture attributes
                                             GL_TEXTURE_2D,
                                             GL_RGBA, // opengl format
                                             (int)texSize_.width,
                                             (int)texSize_.height,
                                             GL_BGRA, // native iOS format
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &renderTexture);
texID_ = CVOpenGLESTextureGetName(renderTexture);
and render some elements into the created texture by the standard method (glFramebufferTexture2D and glDrawArrays).
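The creation of filterTextureCache and the framebuffer attachment aren't shown above; here is a minimal sketch of those steps, assuming placeholder names context_ for the EAGLContext and framebuffer_ for the offscreen FBO, neither of which appears in my original code:

// iOS 5 only: create the texture cache bound to the EAGLContext used for rendering
CVOpenGLESTextureCacheRef filterTextureCache = NULL;
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (void *)context_, NULL, &filterTextureCache);
if (err != kCVReturnSuccess) {
    NSLog(@"Error at CVOpenGLESTextureCacheCreate: %d", err);
}

// Attach the cache-backed texture as the color attachment of the offscreen framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer_);
glBindTexture(GL_TEXTURE_2D, texID_);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texID_, 0);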
After that, I tried to read the CVPixelBuffer and create an image:
- (UIImage *)imageFromTextureCache {
    if (renderTarget) {
        if (kCVReturnSuccess == CVPixelBufferLockBaseAddress(renderTarget,
                                                             kCVPixelBufferLock_ReadOnly)) {
            uint8_t *pixels = (uint8_t *)CVPixelBufferGetBaseAddress(renderTarget);
            CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, CVPixelBufferGetDataSize(renderTarget), bufferFree);
            CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
            CGImageRef imageRef = CGImageCreate(texSize_.width, texSize_.height, 8, 32, 4 * texSize_.width, colorSpaceRef, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst, provider, NULL, NO, kCGRenderingIntentDefault);
            CGImageRef croppedImageRef = imageRef;
            if (!CGSizeEqualToSize(CGSizeMake(1.0, 1.0), contentRatio_)) {
                croppedImageRef = CGImageCreateWithImageInRect(imageRef, CGRectMake(0, 0, texSize_.width * contentRatio_.width, texSize_.height * contentRatio_.height));
            }
            UIImage *image = [UIImage imageWithCGImage:croppedImageRef];
            if (!CGSizeEqualToSize(CGSizeMake(1.0, 1.0), contentRatio_)) {
                CGImageRelease(croppedImageRef);
            }
            CGColorSpaceRelease(colorSpaceRef);
            CGDataProviderRelease(provider);
            CVPixelBufferUnlockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);
            return image;
        }
        return nil;
    }
    return nil;
}
and
UIImage *image = [outputTexture_ imageFromTextureCache];
Then I get the previously rendered image, not the current one.
But when I modified the code as below, I got the currently rendered image:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texID_, 0);
GLubyte *buffer = (GLubyte *)calloc(4, sizeof(GLubyte)); // 1x1 RGBA read needs 4 bytes
glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
UIImage *image = [outputTexture_ imageFromTextureCache];
I can't understand what is happening, and I couldn't find any documentation about it.
Can anyone help me?
Sorry for my poor English... Thanks.
Upvotes: 0
Views: 919
Reputation:
Yes, use glFlush before you access the buffer and glFinish after you access it. I had the same problem.
http://www.khronos.org/opengles/sdk/1.1/docs/man/glFlush.xml
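A minimal sketch of the ordering described, using the helper method from the question (outputTexture_ and imageFromTextureCache are the questioner's names):

glFlush();   // submit any pending GL commands before touching the pixel buffer
UIImage *image = [outputTexture_ imageFromTextureCache];
glFinish();  // then wait for the GPU to finish, as suggested above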
Upvotes: 0
Reputation: 170319
When reading directly from a texture that's associated with a texture cache, it's not guaranteed that any OpenGL ES rendering you do to that texture will have been completed by the time you read those bytes. The reason why you see the correct current frame when using glReadPixels() is that it blocks until rendering has completed.

To guarantee that all rendering has finished before reading from the texture, you can place a glFinish() call before reading the texture data. This will block until all rendering has completed. For video, there are some tricks you can do with staggering glFlush() and glFinish() calls for slightly better performance, but in your case a simple glFinish() should do the trick.
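Applied to the code in the question, that would look something like this sketch (outputTexture_ and imageFromTextureCache are the names used above):

// Block until all rendering into the cache-backed texture has completed
glFinish();
// The CVPixelBuffer backing the texture now reflects the current frame
UIImage *image = [outputTexture_ imageFromTextureCache];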
Upvotes: 0