CocoaMug

Reputation: 41

Fast alternative to drawInRect

What is the fastest way to render and scale an image to a surface at a high frame rate?

I am drawing (with scaling) an NSBitmapImageRep in my view's drawRect: at ~30 FPS, but it uses a huge amount of CPU.

Sample code that sets the pixels in the NSBitmapImageRep at 30 FPS:

NSInteger x, y;
unsigned char *imgptr = nsBitmapImageRepObj.bitmapData;
unsigned char *ptr;
NSInteger rowBytes = nsBitmapImageRepObj.bytesPerRow;

for (y = 0; y < nsInputFrameRect.size.height; y++) {
    ptr = imgptr + (y * rowBytes);
    for (x = 0; x < nsInputFrameRect.size.width; x++) {
        *ptr++ = 1; // R
        *ptr++ = 2; // G
        *ptr++ = 3; // B
    }
}
[self setNeedsDisplay:YES];

The drawing, also at 30 FPS:

- (void)drawRect:(NSRect)pRect {
    [NSGraphicsContext saveGraphicsState];    
    [nsBitmapImageRepObj drawInRect:pRect];
    [NSGraphicsContext restoreGraphicsState];    
} // end drawRect

drawInRect: on the NSBitmapImageRep uses a lot of CPU. What is the fastest way of scaling and painting an image at a high frame rate? The source must be an image whose pixels I can set directly, e.g. via a bitmapData pointer.

If I convert the NSBitmapImageRep to a CIImage and draw that instead, it's about 10% faster. If I draw into an NSOpenGLView instead of an NSView it's about 10% faster again, but the scale/drawInRect: still takes up a lot of CPU time.

Upvotes: 4

Views: 850

Answers (1)

Heinrich Giesen

Reputation: 521

First of all, you should remove the saving and restoring of the NSGraphicsContext, because the -[NSImageRep drawInRect:] method does this itself. See "Graphics Contexts" in the Cocoa Drawing Guide:

Important: Saving and restoring the current graphics state is a relatively expensive operation that should be done as little as possible.
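With the save/restore removed, the draw method shrinks to this (a sketch using the asker's nsBitmapImageRepObj ivar):

```objc
// -[NSImageRep drawInRect:] saves and restores the graphics state itself,
// so drawRect: does not need to do it again.
- (void)drawRect:(NSRect)pRect {
    [nsBitmapImageRepObj drawInRect:pRect];
}
```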

The next point is the structure of your pixel data: 8 bits per sample and 3 samples per pixel, i.e. 24 bits per pixel. This is very hardware (GPU?) unfriendly, because 32-bit values are faster to handle than 24-bit values. There is a simple way to make pixel handling faster: use 4 bytes per pixel instead of 3 by adding a padding byte, which does not harm the image! The padding byte is not part of the image, only of the storage the pixel is kept in, and it is ignored in all drawing operations; it exists purely to make pixel operations faster. So use this in the description of your NSBitmapImageRep:

bitsPerSample:8
samplesPerPixel:3
hasAlpha:NO
isPlanar:NO    // must be meshed
bytesPerRow:0  // let the system do it
bitsPerPixel:32   // <-- important !!
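Put together, the rep could be created like this (a sketch; width, height and the calibrated RGB color space are assumptions, the initializer itself is NSBitmapImageRep's designated one):

```objc
// Sketch: 3 RGB samples stored in 32-bit pixels (one padding byte per pixel).
// width/height are placeholders for the frame size.
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL        // let the rep allocate its buffer
                  pixelsWide:width
                  pixelsHigh:height
               bitsPerSample:8
             samplesPerPixel:3
                    hasAlpha:NO
                    isPlanar:NO          // must be meshed
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0           // let the system do it
                bitsPerPixel:32];        // <-- important !!
```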

Your pixel-setting loop therefore has to be modified:

    *ptr++ = 1;   // R
    *ptr++ = 2;   // G
    *ptr++ = 3;   // B
    *ptr++ = 123; // new!! padding byte, value is unimportant
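Once every pixel occupies 4 bytes, the inner loop can also store one 32-bit word per pixel instead of three separate byte writes. A minimal sketch in plain C (fill_rgb32 is a made-up helper name; it assumes a little-endian CPU, so the meshed R,G,B,pad byte order maps to the word r | g<<8 | b<<16):

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch: with 4 bytes per pixel, each pixel becomes a single 32-bit store.
 * Assumes little-endian byte order and the meshed R,G,B,pad layout described
 * above; the padding byte is simply left at 0. */
static void fill_rgb32(unsigned char *imgptr, size_t rowBytes,
                       size_t width, size_t height,
                       uint8_t r, uint8_t g, uint8_t b) {
    uint32_t pixel = (uint32_t)r | ((uint32_t)g << 8) | ((uint32_t)b << 16);
    for (size_t y = 0; y < height; y++) {
        uint32_t *row = (uint32_t *)(imgptr + y * rowBytes);
        for (size_t x = 0; x < width; x++)
            row[x] = pixel;   /* R, G, B and padding in one store */
    }
}
```

On a little-endian machine the first four bytes of each row then read back as r, g, b, 0.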

BTW: if you read a JPEG image into an NSBitmapImageRep, the resulting imageReps always have 3-byte pixels plus a padding byte. Why this waste of storage? There are good reasons to do so!

Upvotes: 1
