Reputation: 157
I want to implement an algorithm on the GPU using CUDA. At the same time, I wrote a CPU version in C++ to verify the GPU results. However, I ran into trouble when using log() on the CPU and GPU. A very simple piece of the algorithm (used on both CPU and GPU) is shown below:
float U;
float R = U * log(U);
However, when I compare the CPU and GPU results, I find that many of them (459883 out of 1843161) have small differences (the maximum difference is 0.5). Some results are shown below:
U -- R (CPU side) -- R (GPU side) -- R using Python (U * math.log(U))
86312.0 -- 980998.375000 -- 980998.3125 -- 980998.3627440572
67405.0 -- 749440.750000 -- 749440.812500 -- 749440.7721980268
49652.0 -- 536876.875000 -- 536876.812500 -- 536876.8452369706
32261.0 -- 334921.250000 -- 334921.281250 -- 334921.2605240216
24232.0 -- 244632.437500 -- 244632.453125 -- 244632.4440747978
Can anybody give me some suggestions? Which one should I trust?
Upvotes: 2
Views: 895
Reputation: 9512
Out of curiosity, I compared the position of the last bit set in the significand (or, in other words, the number of trailing zeros in the significand).
I did it with Squeak Smalltalk because I'm more comfortable with it, but I'm pretty sure you can find equivalent libraries in Python:
CPU:
#(980998.375000 749440.750000 536876.875000 334921.250000 244632.437500)
collect: [:e | e asTrueFraction numerator highBit].
-> #(23 22 23 21 22)
GPU:
#(980998.3125 749440.812500 536876.812500 334921.281250 244632.453125)
collect: [:e | e asTrueFraction numerator highBit].
-> #(24 24 24 24 24)
Interestingly, that's not as random as we might expect, especially on the GPU side, but there is not enough of a clue at this stage...
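If you want to reproduce that check without Smalltalk, here is a minimal C++ sketch of my own (not from the original post) that counts the trailing zero bits of the single-precision significand; for these normal values that is simply 24 minus the highBit results above:
#include <cstdint>
#include <cstdio>
#include <cstring>

// Count trailing zero bits of the 24-bit significand (implicit leading 1 included).
// A value computed in higher precision and then rounded tends to use all 24 bits
// (0 trailing zeros); the CPU results above leave 1 to 3 bits unused.
static int trailing_zero_bits(float x) {
    std::uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);                   // raw IEEE 754 bit pattern
    std::uint32_t m = (bits & 0x007FFFFFu) | 0x00800000u;  // add the implicit leading 1
    int n = 0;
    while ((m & 1u) == 0u) { m >>= 1u; ++n; }
    return n;
}

int main() {
    const float cpu[] = {980998.375f,  749440.75f,   536876.875f,  334921.25f,    244632.4375f};
    const float gpu[] = {980998.3125f, 749440.8125f, 536876.8125f, 334921.28125f, 244632.453125f};
    for (int i = 0; i < 5; ++i)
        std::printf("CPU: %d trailing zeros, GPU: %d trailing zeros\n",
                    trailing_zero_bits(cpu[i]), trailing_zero_bits(gpu[i]));
    return 0;
}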
Then I used the ArbitraryPrecisionFloat package to perform (emulate) the operations in extended precision and round to the nearest single-precision float; the correct answer matches the CPU one quite exactly:
#( 86312 67405 49652 32261 24232 ) collect: [:e |
| u r |
u := e asArbitraryPrecisionFloatNumBits: 80.
r := u * u ln.
(r asArbitraryPrecisionFloatNumBits: 24) asTrueFraction printShowingMaxDecimalPlaces: 100]
-> #('980998.375' '749440.75' '536876.875' '334921.25' '244632.4375')
It works as well with 64 bits.
But if I emulate the operations in single precision, then I can say the GPU matches the emulated results quite well too (except the second item):
#( 86312 67405 49652 32261 24232 ) collect: [:e |
| u r |
u := e asArbitraryPrecisionFloatNumBits: 24.
r := u * u ln.
r asTrueFraction printShowingMaxDecimalPlaces: 100]
-> #('980998.3125' '749440.75' '536876.8125' '334921.28125' '244632.453125')
So I'd say the CPU probably used double (or extended) precision to evaluate the log and perform the multiplication.
On the other side, the GPU performed all the operations in single precision. The log function of the ArbitraryPrecisionFloat package is correct to half an ulp, but correct rounding like that is not a requirement of IEEE 754, so that can explain the observed mismatch on the second item.
You may try to write the code so as to force float (for example by using logf instead of log if it's C99, or by using intermediate results: float ln = log(u); float r = u * ln;) and possibly use appropriate compilation flags to forbid extended precision (I can't remember which, I don't use C every day). But even then you have very little guarantee of a 100% match on the log function; the standards are too lax.
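For illustration, here is a small C++ sketch of those two variants (the function names are just mine for this example); on x86 with GCC, a flag like -mfpmath=sse is one commonly used way to avoid the x87 extended-precision unit, but check your compiler's documentation:
#include <cmath>

// Variant 1: everything in single precision. The float overload of std::log is
// used here; in C this would be logf(u), and CUDA device code has logf as well.
float r_all_float(float u) {
    float ln = std::log(u);   // single-precision log
    return u * ln;            // single-precision multiply
}

// Variant 2: evaluate log in double precision, but round the intermediate result
// to float before the multiply, as in the snippet above.
float r_rounded_intermediate(float u) {
    float ln = static_cast<float>(std::log(static_cast<double>(u)));
    float r  = u * ln;
    return r;
}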
Upvotes: 3
Reputation: 80325
Which one should I trust?
You should trust the double-precision result computed by Python, which you could also have computed with CUDA or C++ in double precision to obtain very similar (although likely still not identical) values.
To rephrase the first comment made by aland, if you care about an error of 0.0625 in 980998, you shouldn't be using single-precision in the first place. Both the CPU and the GPU result are “wrong” for that level of accuracy. On your examples, the CPU result happens to be more accurate, but you can see that both single-precision results are quite distant from the more accurate double-precision Python result. This is simply a consequence of using a format that allows 24 significant binary digits (about 7 decimal digits), not just for the input and the end result, but also for intermediate computations.
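To make that concrete: 980998 lies between 2^19 and 2^20, so the spacing (ulp) of single-precision floats in that range is 2^(19-23) = 0.0625, i.e. any float result there is quantized to a multiple of 1/16. A quick C++ check of that spacing:
#include <cmath>
#include <cstdio>

int main() {
    float r = 980998.0f;
    // Distance to the next representable float above r: the ulp in this range.
    float ulp = std::nextafter(r, 2.0f * r) - r;
    std::printf("ulp near %.1f is %.6f\n", r, ulp);   // prints 0.062500
    return 0;
}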
If the input is provided as float and you want the most accurate float result for R, compute U * log(U) using double and round to float only at the end. Then the results will almost always be identical between CPU and GPU.
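A minimal sketch of that recommendation in C++ (the helper name is just for illustration; under CUDA the equivalent device function would call the double-precision log from the CUDA math library):
#include <cmath>

// Compute U * log(U) entirely in double precision and round to float only at the end,
// so CPU and GPU see a single final rounding instead of several intermediate ones.
inline float u_log_u(float U) {
    double u = static_cast<double>(U);   // promote the input once
    double r = u * std::log(u);          // double-precision log and multiply
    return static_cast<float>(r);        // one rounding, at the very end
}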
Upvotes: 5