Luciano Caetano

Reputation: 59

Why do transparent images affect Java performance?

I have been working with images that have transparency in Java, and one thing I noticed is that this kind of image affects performance ridiculously.

E.g.: if you draw an image with a white background, you get 100 fps. If you swap in the same image with a transparent background, it drops to 30 fps.

Searching the internet, I found some solutions, but what I didn't find is an explanation. Could someone explain to me why this happens?

Upvotes: 0

Views: 361

Answers (2)

maaartinus

Reputation: 46422

I guess the reason is very simple:

Drawing an opaque image simply means replacing an area with something else (just a bunch of memory copy operations).

Drawing a transparent image means combining the old image with the new one, which involves quite a lot of per-pixel color calculations.
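
The difference can be sketched in plain Java. This is only an illustration of the idea, not Java2D's actual internals, and it assumes pixels packed as 32-bit ARGB ints:

```java
// Contrast: opaque draw = bulk memory copy; transparent draw = per-pixel math.
public class BlendDemo {

    // Opaque image: destination pixels are simply replaced (a memory copy).
    static void drawOpaque(int[] dst, int[] src) {
        System.arraycopy(src, 0, dst, 0, src.length);
    }

    // Transparent image: every pixel needs a source-over blend,
    // out = src * a + dst * (1 - a), computed per color channel.
    static void drawTransparent(int[] dst, int[] src) {
        for (int i = 0; i < src.length; i++) {
            int a  = src[i] >>> 24;                              // alpha 0..255
            int sr = (src[i] >> 16) & 0xFF, sg = (src[i] >> 8) & 0xFF, sb = src[i] & 0xFF;
            int dr = (dst[i] >> 16) & 0xFF, dg = (dst[i] >> 8) & 0xFF, db = dst[i] & 0xFF;
            int r = (sr * a + dr * (255 - a)) / 255;
            int g = (sg * a + dg * (255 - a)) / 255;
            int b = (sb * a + db * (255 - a)) / 255;
            dst[i] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
    }

    public static void main(String[] args) {
        int[] dst = {0xFF0000FF};    // opaque blue background
        int[] src = {0x80FF0000};    // red at ~50% alpha
        drawTransparent(dst, src);
        System.out.println(Integer.toHexString(dst[0])); // ff80007f
    }
}
```

Note that the opaque path is one `System.arraycopy` call per row, while the blend costs several multiplications and a division per channel per pixel.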

EDIT

Each of the above operations can be sped up substantially by using the GPU. The GPU is pretty dumb and can only do specialized operations, but it can do them in parallel for many pixels. This makes the GPU a huge speed advantage for the operations above.

You're complaining that the slowdown is bigger in Java than in C#. The answer is most probably missing drivers. Without them, less efficient GPU operations must be used, or the CPU must do the job.

Upvotes: 2

nanofarad

Reputation: 41281

I'm not sure what context the images are in (Swing? AWT? Slick2D? JavaFX? LWJGL?), so I'll answer in principles instead.

Remember that Java is generally interpreted before the JIT kicks in (which only happens on some platforms), so CPU-heavy operations are much more noticeable than in other languages.

Let's take a simple case where we need to view images that are stacked on top of each other, and they're conveniently one-dimensional.

Key:

  • [CAMERA] is self-explanatory
  • Spaces are transparent pixels or empty space
  • | are imaginary rays of vision from the camera
  • : are such rays after passing through a partially transparent pixel.
  • Capital letters are opaque pixels.
  • Lowercase letters are partially transparent pixels.

Let's deal with the simplest case of a single opaque image:

 [ CAMERA ]
 ||||||||||
    RGBRG

In the best case, the image can simply be copied into the view buffer and scaled. Now, on to multiple images:

    [CAMERA]
 |||||||||||||
 |||RGBRGB||||
 |||      ||||
 YYYYYYYYYYYYY

In this case the renderer can copy the yellow into the buffer after transforming it, and then blot out the area covered by the top image.

Now, for transparency:

     [CAMERA]
 |||||||||||||
 |||RG||GB||||
 |||  ||  ||||
 YYYYYYYYYYYYY

Here, you can't just copy a buffer. You have to check every pixel of the upper image and then conditionally overdraw or not, depending on transparency, which leads to lower performance.
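
A minimal sketch of that conditional overdraw, assuming pixels as packed ARGB ints where alpha 0 means fully transparent:

```java
// A sprite with transparent "holes" cannot be bulk-copied: each pixel
// needs a branch to decide whether it covers the background.
public class MaskedCopyDemo {

    // Copy a source pixel only where it is not fully transparent;
    // transparent pixels leave the background untouched.
    static void drawMasked(int[] dst, int[] src) {
        for (int i = 0; i < src.length; i++) {
            if ((src[i] >>> 24) != 0) {   // the per-pixel check: the extra cost
                dst[i] = src[i];
            }
        }
    }

    public static void main(String[] args) {
        int[] background = {0xFFFFFF00, 0xFFFFFF00, 0xFFFFFF00}; // yellow row
        int[] sprite     = {0xFFFF0000, 0x00000000, 0xFF0000FF}; // red, hole, blue
        drawMasked(background, sprite);
        for (int p : background) {
            System.out.println(Integer.toHexString(p));
        }
    }
}
```

The middle background pixel stays yellow because the sprite is transparent there; the branch per pixel is what replaces the single bulk copy.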

Note that this is a basic overview; real cases would be much more complex and optimized.

As for partial transparency (full alpha channels), this is even worse:

     [CAMERA]
 |||||||||||||
 |||RGggGB||||
 |||  ::  ||||
 YYYYYYYYYYYYY

As you can see, here we still need to determine the color under the middle pixels, which are partially transparent. Every such pixel needs a full blending calculation, and if the image is at a slant, possibly one involving multiple pixels, depending on the render strategy. This is distinctly the most expensive operation of all the ones shown here.
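
Stacking translucent layers makes this worse, because every layer needs its own full source-over blend before the ray "reaches" the yellow. A sketch with assumed packed ARGB ints, not any particular library's renderer:

```java
// Each translucent layer above the background costs one full blend per pixel.
public class StackedBlendDemo {

    // Source-over blend of one translucent pixel onto an opaque one:
    // out = src * a + dst * (1 - a), per color channel.
    static int over(int src, int dst) {
        int a  = src >>> 24;
        int sr = (src >> 16) & 0xFF, sg = (src >> 8) & 0xFF, sb = src & 0xFF;
        int dr = (dst >> 16) & 0xFF, dg = (dst >> 8) & 0xFF, db = dst & 0xFF;
        int r = (sr * a + dr * (255 - a)) / 255;
        int g = (sg * a + dg * (255 - a)) / 255;
        int b = (sb * a + db * (255 - a)) / 255;
        return 0xFF000000 | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int yellow = 0xFFFFFF00;  // the opaque Y background
        int g50    = 0x8000FF00;  // a lowercase 'g': green at ~50% alpha
        int once  = over(g50, yellow); // one translucent layer: one blend
        int twice = over(g50, once);   // two layers: two full blends
        System.out.printf("%08x %08x%n", once, twice);
    }
}
```

Two stacked translucent pixels mean the whole calculation runs twice for that ray, which is why deep stacks of alpha-blended images get expensive quickly.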

Upvotes: 5
