Marcotmp

Reputation: 71

Adobe Native Extension iOS H.264 file encoder

I'm trying to make an Adobe Native Extension H.264 file encoder for iOS. I have the encoder part working; it runs fine from an Xcode test project. The problem is that when I try to run it from the ANE file, it doesn't work.

My code to add a frame, which converts the BitmapData argument into a CGImage:

    // convert the first argument into a FREBitmapData
    FREObject objectBitmapData = argv[0];
    FREBitmapData bitmapData;

    FREAcquireBitmapData( objectBitmapData, &bitmapData );

    CGImageRef theImage = getCGImageRefFromBitmapData(bitmapData);

    [videoRecorder addFrame:theImage];

In this case the CGImageRef has data, but when I try to open the video, it only shows a black screen.

When I test it from an Xcode project it also saves a black-screen video, but if I create the CGImage from a UIImage file, then modify that CGImage and pass it to addFrame, it works fine.

My guess is that the CGImageRef theImage is not being created correctly.

The code I'm using to create the CGImageRef is this: https://stackoverflow.com/a/8528969/800836
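
In essence it does something along these lines (a simplified sketch of that approach, not the exact linked code; it assumes 32-bit pixel data and uses the width, height, lineStride32 and bits32 fields of FREBitmapData):

    CGImageRef getCGImageRefFromBitmapData(FREBitmapData bitmapData) {
        size_t width  = bitmapData.width;
        size_t height = bitmapData.height;
        // lineStride32 counts 32-bit values per row, so multiply by 4 for bytes
        size_t bytesPerRow = bitmapData.lineStride32 * 4;

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        // wrap the AS3 BitmapData pixels without copying them
        CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bitmapData.bits32,
                                                                  bytesPerRow * height, NULL);

        CGImageRef image = CGImageCreate(width, height,
                                         8,   // bits per component
                                         32,  // bits per pixel
                                         bytesPerRow,
                                         colorSpace,
                                         kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                                         provider, NULL, false, kCGRenderingIntentDefault);

        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpace);
        return image;
    }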

Why does the CGImage not work correctly when it is created with CGImageCreate?

Thanks!

Upvotes: 2

Views: 678

Answers (1)

Marcotmp

Reputation: 71

In case someone has the same problem, my solution was to create a bitmap context with 0 bytes per row, so that Core Graphics chooses the row stride itself, and then build a CGImageRef from that context:

    CGBitmapInfo bitmapInfo    = kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, 1024, 768, 8, /* bytes per row */ 0, colorSpace, bitmapInfo);

    // create an image from the context
    CGImageRef tmpImage = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

Then copy the pixel data into this tmpImage and create a new image based on it:

    CGImageRef getCGImageRefFromRawData(FREBitmapData bitmapData) {
        // tmpImage is the template image created above
        CGImageRef abgrImageRef = tmpImage;
        CFDataRef abgrData = CGDataProviderCopyData(CGImageGetDataProvider(abgrImageRef));
        UInt8 *pixelData = (UInt8 *) CFDataGetBytePtr(abgrData);
        CFIndex length = CFDataGetLength(abgrData);

        // copy the FREBitmapData pixels into the template image's buffer, byte by byte
        uint32_t *input = bitmapData.bits32;
        int index2 = 0;
        for (CFIndex index = 0; index < length; index += 4) {
            pixelData[index]     = (input[index2] >> 0)  & 0xFF;
            pixelData[index + 1] = (input[index2] >> 8)  & 0xFF;
            pixelData[index + 2] = (input[index2] >> 16) & 0xFF;
            pixelData[index + 3] = (input[index2] >> 24) & 0xFF;
            index2++;
        }

        // grab the image info from the template image
        size_t width = CGImageGetWidth(abgrImageRef);
        size_t height = CGImageGetHeight(abgrImageRef);
        size_t bitsPerComponent = CGImageGetBitsPerComponent(abgrImageRef);
        size_t bitsPerPixel = CGImageGetBitsPerPixel(abgrImageRef);
        size_t bytesPerRow = CGImageGetBytesPerRow(abgrImageRef);
        CGColorSpaceRef colorspace = CGImageGetColorSpace(abgrImageRef);
        CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(abgrImageRef);

        // create the new image from the copied pixel data
        CFDataRef argbData = CFDataCreate(NULL, pixelData, length);
        CGDataProviderRef provider = CGDataProviderCreateWithCFData(argbData);
        CGImageRef argbImageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorspace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault);

        // release what we can
        CFRelease(abgrData);
        CFRelease(argbData);
        CGDataProviderRelease(provider);

        return argbImageRef;
    }
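
For completeness, this is roughly how it gets wired into the FRE function (a sketch; the FRE calls are the standard AIR C API, while videoRecorder and addFrame: are the objects from the question above and assumed to be in scope, and the function name addVideoFrame is just an example):

    FREObject addVideoFrame(FREContext ctx, void* funcData, uint32_t argc, FREObject argv[])
    {
        FREObject objectBitmapData = argv[0];
        FREBitmapData bitmapData;

        FREAcquireBitmapData(objectBitmapData, &bitmapData);

        // build the CGImage from the AS3 BitmapData and hand it to the encoder
        CGImageRef theImage = getCGImageRefFromRawData(bitmapData);
        [videoRecorder addFrame:theImage];

        // release the image and give the BitmapData back to the runtime
        CGImageRelease(theImage);
        FREReleaseBitmapData(objectBitmapData);

        return NULL;
    }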

Upvotes: 1
