Russell Thorman

Reputation: 33

Render speed for individual pixels in loop

I'm working on drawing individual pixels to a UIView to create fractal images. My problem is my rendering speed. I am currently running this loop 260,000 times, but would like to render even more pixels. As it is, it takes about 5 seconds to run on my iPad Mini.

I was using a UIBezierPath before, but that was even a bit slower (about 7 seconds). I've been looking into NSBitmap-related approaches, but I'm not sure whether that would speed things up or how to implement it in the first place.

I was also thinking about storing the pixels from my loop in an array and then drawing them all together after the loop. Again though, I'm not quite sure what the best way would be to store pixels in an array and retrieve them from it.

Any help on speeding up this process would be great.

for (int i = 0; i < 260000; i++) {

    // Pick a transform at random, weighted by the bucket probabilities.
    float RN = drand48();

    // (Inner index renamed from i to j so it no longer shadows the outer loop variable.)
    for (int j = 1; j < numBuckets; j++) {
        if (RN < bucket[j]) {
            col = j;
            CGContextSetFillColor(context, CGColorGetComponents([UIColor colorWithRed:(colorSelector[j][0]) green:(colorSelector[j][1]) blue:(colorSelector[j][2]) alpha:(1)].CGColor));
            break;
        }
    }

    // Apply the affine transform for the chosen bucket.
    xT = myTextFieldArray[1][1][col]*x1 + myTextFieldArray[1][2][col]*y1 + myTextFieldArray[1][5][col];
    yT = myTextFieldArray[1][3][col]*x1 + myTextFieldArray[1][4][col]*y1 + myTextFieldArray[1][6][col];

    x1 = xT;
    y1 = yT;
    if (i > 10000) {
        // After the warm-up iterations, plot the point as a half-point rect.
        CGContextFillRect(context, CGRectMake(xOrigin+(xT-xMin)*sizeScalar,yOrigin-(yT-yMin)*sizeScalar,.5,.5));
    }
    else if (i < 10000) {
        // During warm-up, just track the attractor's bounding box.
        if (x1 < xMin) {
            xMin = x1;
        }
        else if (x1 > xMax) {
            xMax = x1;
        }
        if (y1 < yMin) {
            yMin = y1;
        }
        else if (y1 > yMax) {
            yMax = y1;
        }
    }
    else if (i == 10000) {
        // Warm-up done: derive the scale and origin from the bounding box.
        if (xMax - xMin > yMax - yMin) {
            sizeScalar = 960/(xMax - xMin);
            yOrigin = 1000-(1000-sizeScalar*(yMax-yMin))/2;
        }
        else {
            sizeScalar = 960/(yMax - yMin);
            xOrigin = (1000-sizeScalar*(xMax-xMin))/2;
        }
    }
}

Edit

I created a multidimensional array to store UIColors in, so I could use a bitmap to draw my image. It is significantly faster, but now my colors are coming out wrong.

Here is where I am storing my UIColors into the array:

// Map the transformed point to pixel coordinates and record its color.
int xPixel = xOrigin+(xT-xMin)*sizeScalar;
int yPixel = yOrigin-(yT-yMin)*sizeScalar;
pixelArray[1000-yPixel][xPixel] = customColors[col];

Here is my drawing stuff:

CGDataProviderRef provider = CGDataProviderCreateWithData(nil, pixelArray, 1000000, nil);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

CGImageRef image = CGImageCreate(1000,  // width
                                 1000,  // height
                                 8,     // bits per component
                                 32,    // bits per pixel
                                 4000,  // bytes per row
                                 colorSpace,
                                 kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipLast,
                                 provider,
                                 nil, // no decode array
                                 NO,  // no interpolation
                                 kCGRenderingIntentDefault); // default rendering intent

CGContextDrawImage(context, self.bounds, image);

Not only are the colors not what they are supposed to be, but every time I render my image, the colors are completely different from the previous time. I have been testing different things with the colors, but I still have no idea why they are wrong, and I'm even more confused about how they keep changing.

Upvotes: 3

Views: 852

Answers (3)

rickster

Reputation: 126127

Per-pixel drawing — with complicated calculations for each pixel, like fractal rendering — is one of the hardest things you can ask a computer to do. Each of the other answers here touches on one aspect of its difficulty, but that's not quite all. (Luckily, this kind of rendering is also something that modern hardware is optimized for, if you know what to ask it for. I'll get to that.)

Both @jcaron and @JustinMeiners note that vector drawing operations (even rect fill) in CoreGraphics take a penalty for CPU-based rasterization. Manipulating a buffer of bitmap data would be faster, but not a lot faster.

Getting that buffer onto the screen also takes time, especially if you're having to go through a process of creating bitmap image buffers and then drawing them in a CG context — that's doing a lot of sequential drawing work on the CPU and a lot of memory-bandwidth work to copy that buffer around. So @JustinMeiners is right that direct access to GPU texture memory would be a big help.

However, if you're still filling your buffer in CPU code, you're still hampered by two costs (at best, worse if you do it naively):

  • sequential work to render each pixel
  • memory transfer cost from texture memory to frame buffer when rendering

@JustinMeiners' answer is good for his use case — image sequences are pre-rendered, so he knows exactly what each pixel is going to be and he just has to schlep it into texture memory. But your use case requires a lot of per-pixel calculations.

Luckily, per-pixel calculations are what GPUs are designed for! Welcome to the world of pixel shaders. For each pixel on the screen, you can run an independent calculation to determine the relationship of that point to your fractal set and thus what color to draw it in. The GPU runs that calculation in parallel for many pixels at once, and its output goes straight to the screen, so there's no memory overhead for dumping a bitmap into the framebuffer.

One easy way to work with pixel shaders on iOS is SpriteKit — it can handle most of the necessary OpenGL/Metal setup for you, so all you have to write is the per-pixel algorithm in GLSL (actually, a subset of GLSL that gets automatically translated to Metal shader language on Metal-supported devices). Here's a good tutorial on that, and here's another on OpenGL ES pixel shaders for iOS in general.
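To make this concrete, here's a minimal sketch of the SpriteKit route. Caveat: the shader below computes a classic Mandelbrot escape-time fractal, since an IFS like the one in the question iterates points rather than pixels and doesn't translate directly to a fragment shader. The node setup, the 64-iteration cap, and the coordinate mapping are all illustrative, not taken from the question.

// A trivial escape-time fractal as a SpriteKit fragment shader (GLSL subset).
static NSString * const kFractalShaderSource =
@"void main() {\n"
 "    vec2 c = (v_tex_coord - 0.5) * 3.0; // map texture coords to the complex plane\n"
 "    vec2 z = vec2(0.0);\n"
 "    float n = 0.0;\n"
 "    for (int i = 0; i < 64; i++) {\n"
 "        z = vec2(z.x * z.x - z.y * z.y, 2.0 * z.x * z.y) + c;\n"
 "        if (dot(z, z) > 4.0) break;\n"
 "        n += 1.0;\n"
 "    }\n"
 "    gl_FragColor = vec4(vec3(n / 64.0), 1.0); // shade by escape count\n"
 "}\n";

// In an SKScene subclass: attach the shader to a full-screen sprite.
- (void)didMoveToView:(SKView *)view
{
    SKSpriteNode *canvas = [SKSpriteNode spriteNodeWithColor:[SKColor blackColor]
                                                        size:self.size];
    canvas.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
    canvas.shader = [SKShader shaderWithSource:kFractalShaderSource];
    [self addChild:canvas];
}

Every pixel of the sprite then gets its color from the shader, evaluated on the GPU, with no CPU-side pixel loop at all.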

Upvotes: 7

jcaron

Reputation: 17710

If you really want to change many different pixels individually, your best option is probably to allocate a chunk of memory (of size width * height * bytes per pixel), make the changes directly in memory, and then convert the whole thing into a bitmap at once with CGBitmapContextCreateWithData.
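As a rough sketch of that approach (untested): it assumes the question's 1000x1000 canvas, RGBA layout with 8 bits per component, that context is the current drawing context, and that xPixel, yPixel, r, g, b stand in for the coordinates and bucket color computed in the question's loop.

const size_t width = 1000, height = 1000;
const size_t bytesPerPixel = 4;                    // RGBA
const size_t bytesPerRow = width * bytesPerPixel;

uint8_t *buffer = calloc(height * bytesPerRow, 1); // zeroed = fully transparent

// Inside the point loop: write raw bytes instead of calling CGContextFillRect.
size_t offset = (size_t)yPixel * bytesPerRow + (size_t)xPixel * bytesPerPixel;
buffer[offset + 0] = r;   // red, 0-255 (hypothetical color values)
buffer[offset + 1] = g;   // green
buffer[offset + 2] = b;   // blue
buffer[offset + 3] = 255; // alpha (opaque)

// After the loop: wrap the buffer in a bitmap context and draw it once.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreateWithData(buffer, width, height,
                                                    8,           // bits per component
                                                    bytesPerRow,
                                                    colorSpace,
                                                    kCGImageAlphaPremultipliedLast,
                                                    NULL, NULL); // no release callback
CGImageRef image = CGBitmapContextCreateImage(bitmap);
CGContextDrawImage(context, self.bounds, image);

CGImageRelease(image);
CGContextRelease(bitmap);
CGColorSpaceRelease(colorSpace);
free(buffer);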

There may be even faster methods than this (see Justin's answer).

Upvotes: 2

Justin Meiners

Reputation: 11113

If you want to maximize render speed I would recommend bitmap rendering. Vector rasterization is much slower and CGContext drawing isn't really intended for high performance realtime rendering.

I faced a similar technical challenge and found CVOpenGLESTextureCacheRef to be the fastest. The texture cache allows you to upload a bitmap directly into graphics memory for fast rendering. Rendering utilizes OpenGL, but because it's just a 2D fullscreen image, you really don't need to learn much about OpenGL to use it.

You can see an example I wrote of using the texture cache here: https://github.com/justinmeiners/image-sequence-streaming

My original question related to this is here: How to directly update pixels - with CGImage and direct CGDataProvider

My project renders bitmaps from files so it is a little bit different but you could look at ISSequenceView.m for an example of how to use the texture cache and setup OpenGL for this kind of rendering.

Your rendering procedure could look something like this (sketched in code after the list):

1. Draw to buffer (raw bytes)
2. Lock texture cache
3. Copy buffer to texture cache
4. Unlock texture cache
5. Draw fullscreen quad with texture
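Here is a condensed sketch of those steps with CVOpenGLESTextureCacheRef (not code from the linked project; it assumes eaglContext, pixels, width, and height already exist, needs <CoreVideo/CoreVideo.h> and <OpenGLES/ES2/glext.h>, and omits error checking; see ISSequenceView.m for a complete implementation):

// One-time setup: a texture cache tied to your EAGLContext, plus an
// IOSurface-backed pixel buffer the GPU can read directly.
CVOpenGLESTextureCacheRef cache;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &cache);

NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_32BGRA,
                    (__bridge CFDictionaryRef)attrs, &pixelBuffer);

// Steps 1-4, per frame: lock, copy your raw BGRA bytes in, unlock.
// (If CVPixelBufferGetBytesPerRow reports row padding, copy row by row.)
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
memcpy(CVPixelBufferGetBaseAddress(pixelBuffer), pixels, width * height * 4);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

// Wrap the pixel buffer as a GL texture without an extra upload.
CVOpenGLESTextureRef texture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache,
                                             pixelBuffer, NULL,
                                             GL_TEXTURE_2D, GL_RGBA,
                                             (GLsizei)width, (GLsizei)height,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0,
                                             &texture);

glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Step 5: draw a fullscreen textured quad (shader and geometry omitted),
// then CFRelease(texture) and CVOpenGLESTextureCacheFlush(cache, 0).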

Upvotes: 2
