Karlovsky120

Reputation: 6352

GLSL log returning an undefined result

I'm trying to draw the Mandelbrot set. I've implemented the algorithm on the CPU, and now I want to reproduce it on the GPU, but the code behaves differently.

In the CPU program, at one point, I take std::abs(z), where z is a complex number, and write the value into the green channel on screen.

On the GPU, I take the same z and call the following function (Vulkan, GLSL):

double module(dvec2 z) {
    return sqrt(z.x * z.x + z.y * z.y);
}

double color(dvec2 z) {
    return module(z);
}

When I write color(z) into the green channel, I get exactly the same picture as from the CPU program, so the code behaves identically, at least up to that point.
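For reference, the write into the green channel looks roughly like this (sketch only; the output variable name and the cast placement are assumptions, not the exact code from my shader):

layout(location = 0) out vec4 outColor;

void writeGreen(dvec2 z) {
    // the double result has to be narrowed to float for the vec4 output
    outColor = vec4(0.0, float(color(z)), 0.0, 1.0);
}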

Next, I changed the CPU code to instead take std::log(std::abs(z)) / 20 and put that in the green channel. This is the image I get (numbers that are in the Mandelbrot set are coloured white): [screenshot]

You can see that the green is never clipped, so the result for each pixel is somewhere in the range (0, 1).

I then changed the GPU code to this:

double module(dvec2 z) {
    return sqrt(z.x * z.x + z.y * z.y);
}

double color(dvec2 z) {
    return log(module(z));
}

I wrote color(z) / 20 into the green channel. This is the resulting image: [screenshot]

As you can see, the value of color(z) / 20 must be <= 0. I tried changing the color function to this:

double color(dvec2 z) {
    return -log(module(z));
}

This was to see whether the value was 0 or negative. I still got the same image, so the value must be 0. To confirm this, I changed the code again, now to this:

double color(dvec2 z) {
    return log(module(z)) + 0.5;
}

and wrote color(z) to the green channel (dropping the division by 20). I expected the result to be a medium green colour.

To my surprise, the image did not change; the pixels were still pitch black.

Perplexed, I reverted the change to the original:

double color(dvec2 z) {
    return log(module(z));
}

but I wrote color(z) + 0.5 into the green channel, and I got this: [screenshot]
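In other words, these two variants behave differently (sketch only; outColor and the exact cast placement are assumptions):

// Variant 1: the 0.5 is added inside color(), i.e. return log(module(z)) + 0.5;
// result: still pitch black
outColor.g = float(color(z));

// Variant 2: color() returns log(module(z)) and the caller adds the 0.5
// result: the expected green offset finally appears
outColor.g = float(color(z) + 0.5);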

To summarize, it seems that log(module(z)) is returning some undefined value. If you negate it or add anything to it, it stays undefined. But when that value is returned from a function whose return type is double, the returned value becomes 0, which can then be added to.

Why does this happen? The function module(z) is guaranteed to return a positive number, so the log function should return a valid result. Both std::log and GLSL log are defined as the natural logarithm of the argument, so the values should be exactly the same (ignoring precision error).

How do I make GLSL log behave properly?

Upvotes: 1

Views: 193

Answers (1)

Karlovsky120

Reputation: 6352

It turns out that the GPU doesn't really like being asked to calculate the log of a very large number. From what I gather, log (actually ln) is implemented as a Taylor series, which is unfortunate because the series raises the argument to the n-th power for its n-th term.

However, if you have a number represented as x = mantissa * 2^exp, you can get ln(x) from the following formula:

ln(x) = exp * ln(2) + ln(mantissa)

Whatever x is, the mantissa lies in [1, 2), so it is always small. For example, 1024 = 1.0 * 2^10, so ln(1024) = 10 * ln(2) + ln(1.0) ≈ 6.93. Here's a function for the fragment shader:

// IEEE 754 single precision: 23 mantissa bits, 8 exponent bits, bias 127
const int mantissaBits = 23;
const int expBits = 8;
const float ln2 = 0.69314718056; // ln(2); named this way to avoid clashing with the built-in log2()

float ln(float z) {
    int integerValue = floatBitsToInt(z);

    // extract the unbiased exponent
    int exponent = ((integerValue >> mantissaBits) & ((1 << expBits) - 1))
                   - ((1 << (expBits - 1)) - 1);

    // force the exponent field to the bias (127) so the remaining
    // float is just the mantissa, scaled into [1, 2)
    integerValue |= ((1 << expBits) - 1) << mantissaBits;
    integerValue &= ~(1 << (mantissaBits + expBits - 1));

    return exponent * ln2 + log(intBitsToFloat(integerValue));
}

Note that in GLSL this trick only works with floats - there is no 64-bit integer type, and thus no doubleBitsToLong or its inverse.
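A minimal usage sketch with the helpers from the question (the narrowing cast assumes the magnitude of module(z) still fits in a 32-bit float):

double color(dvec2 z) {
    // narrow to float, use the bit-trick ln() above, then widen again
    return double(ln(float(module(z))));
}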

Upvotes: 1
