Ole Bjørn Setnes

Reputation: 788

drawRect always wants to update the entire screen when view is shrunken

I have a CGImage that is displayed in a UIView controlled by a UIScrollView. The image is typically 1680×1050 in 24-bit color without an alpha channel.

The image is created like this:

CGImageRef bitmapClass::CreateBitmap(int width, int height, int imageSize, int colorDepth, int bytesPerRow)
{
   unsigned char* m_pvBits = (unsigned char*)malloc(imageSize);

   // Initialize bitmap buffer to black (red, green, blue)
   memset(m_pvBits, 0, imageSize);

   m_DataProviderRef = 
      CGDataProviderCreateWithData(NULL, m_pvBits, imageSize, NULL); 

   m_ColorSpaceRef = 
      CGColorSpaceCreateDeviceRGB();   

   return
      CGImageCreate(width, height, 
                    8, //kBitsPerComponent 
                    colorDepth, 
                    bytesPerRow, 
                    m_ColorSpaceRef, 
                    kCGBitmapByteOrderDefault | kCGImageAlphaNone, 
                    m_DataProviderRef, 
                    NULL, false, kCGRenderingIntentDefault);
}
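
A side note on this code: CGDataProviderCreateWithData is given NULL as the release callback, so the malloc'ed buffer is never freed when the data provider is released. If that matters, a release callback along these lines could be supplied — the function name here is my own invention; only the callback signature is Core Graphics' CGDataProviderReleaseDataCallback:

```c
#include <stdlib.h>

// Hypothetical release callback: Core Graphics calls it when the data
// provider's retain count drops to zero, which is the right moment to
// free the pixel buffer.
static void ReleasePixelData(void *info, const void *data, size_t size)
{
    free((void *)data);
}

// ...then pass it instead of the final NULL:
// m_DataProviderRef =
//    CGDataProviderCreateWithData(NULL, m_pvBits, imageSize, ReleasePixelData);
```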

The image content is updated regularly in the background by changing the pixels in m_pvBits, and the UIView is refreshed with:

[myView setNeedsDisplayInRect:rect];

That invokes drawRect that displays the image like this:

- (void)drawRect:(CGRect)rect
{
   CGRect imageRect;
   imageRect.origin = CGPointMake(0.0, 0.0);
   imageRect.size = CGSizeMake(self.imageWidth, self.imageHeight);

   CGContextRef context = UIGraphicsGetCurrentContext();

   CGContextSetInterpolationQuality(context, kCGInterpolationNone);

   // Drawing code
   CGContextDrawImage(context, imageRect, (CGImageRef)pWindow->GetImageRef());
}

This works well as long as the view is not shrunk (zoomed out). I know that 'rect' is not actually used directly in drawRect, but the 'context' seems to know what part of the screen CGContextDrawImage should update.

My problem is that even though I only invalidate a small area of the screen with setNeedsDisplayInRect, drawRect is called with the entire screen when the view is shrunk. So if my image is 1680×1050 and I invalidate a small rectangle (x,y,w,h) = (512, 640, 32, 32), drawRect is called with (x,y,w,h) = (0, 0, 1680, 1050).

Why?

Upvotes: 3

Views: 1496

Answers (2)

Johannes Fahrenkrug

Reputation: 44836

There is an interesting sample code project from Apple that might help you. It's called Large Image Downsizing. It does a bunch of things, but the really interesting part is TiledImageView.m. In it, the view's layer class is set to CATiledLayer:

+ (Class)layerClass {
    return [CATiledLayer class];
}

The tiledLayer properties are set up in the init method:

// Create a new TiledImageView with the desired frame and scale.
-(id)initWithFrame:(CGRect)_frame image:(UIImage*)_image scale:(CGFloat)_scale {
    if ((self = [super initWithFrame:_frame])) {
        self.image = _image;
        imageRect = CGRectMake(0.0f, 0.0f, CGImageGetWidth(image.CGImage), CGImageGetHeight(image.CGImage));
        imageScale = _scale;
        CATiledLayer *tiledLayer = (CATiledLayer *)[self layer];
        // levelsOfDetail and levelsOfDetailBias determine how
        // the layer is rendered at different zoom levels.  This
        // only matters while the view is zooming, since once the 
        // the view is done zooming a new TiledImageView is created
        // at the correct size and scale.
        tiledLayer.levelsOfDetail = 4;
        tiledLayer.levelsOfDetailBias = 4;
        tiledLayer.tileSize = CGSizeMake(512.0, 512.0); 
    }
    return self;
}

and finally, and this is the really interesting part, this is the drawRect: implementation (the NSLog is mine):

-(void)drawRect:(CGRect)_rect {
    CGContextRef context = UIGraphicsGetCurrentContext();    
    CGContextSaveGState(context);
    // Scale the context so that the image is rendered 
    // at the correct size for the zoom level.
    CGContextScaleCTM(context, imageScale,imageScale);  
    NSLog(@"rect: %@, imageScale: %.4f, clip: %@", NSStringFromCGRect(_rect), imageScale, NSStringFromCGRect(CGContextGetClipBoundingBox(context)));
    CGContextDrawImage(context, imageRect, image.CGImage);
    CGContextRestoreGState(context);    
}

drawRect: is called for every 512x512 tile. The passed-in _rect is not used at all inside the method, though. It doesn't need to be, because the clipping path of the current context is already set to the correct region (as you'll see from the output of CGContextGetClipBoundingBox). So, like Rob said, CGContextDrawImage can be used and only the requested "tile" of the large image will be drawn, although it looks as if the whole image is drawn again and again. If you don't use CATiledLayer, the whole image indeed is drawn again and again, which slows everything down. But when you use CATiledLayer, only the currently needed "tiles" are drawn, which makes it a lot faster.

I hope this helps.

Upvotes: 3

Ole Bjørn Setnes

Reputation: 788

Thank you for the answer.

My observation is that the rect parameter passed to drawRect is the same as what I get when I query CGContextGetClipBoundingBox().

You suggest that I should optimize my drawing to the clipping bounds - but I'm not doing any calculations here; I'm just calling CGContextDrawImage(). I can see other programmers complaining about the poor performance of CGContextDrawImage, and that it apparently got much slower from iPhone 3 to 4. I would of course like the performance to be better, but right now the biggest problem is that iOS updates the entire image even though only a small part is invalidated (with setNeedsDisplay).

By zoomed out I mean that I'm able to see more of the image, but the image is shrunk. I'm not constantly zooming in and out, but a small part of the image is regularly updated, invoking setNeedsDisplay. When I zoom or pan it is very slow and jerky because iOS sets the entire screen to be updated.

You are right that CGContextDrawImage optimizes what is actually drawn - I can see that by measuring how long the operation takes.

As you can see, I allocate my image bits with malloc. Is there another way to allocate the image bits that is better optimized for the graphics processor?

Is there a more optimal ColorRef mode or other flags for CGImageCreate?

Upvotes: 1
