Vishwadeep Singh

Reputation: 1043

In C programming, will a calculation be slower if any variable in the expression is NaN?

This is a completely hypothetical question.

In the code below, I am performing a calculation in which one variable, z, has been assigned the value NaN. Will the main calculation be slower compared to a normal z value (like z = 1.0)?

float x = 1.0, y = 2.0; // assumed values; x and y are used below but not defined in the question
float z = 0.0/0.0; // that means z is NaN
float p = 50.0, q = 100.0, r = 150.0;
// main calculation Type 1
float c = ((x*100)+(y*100))/(x*100)+(y*100)+152*(p+q/r)+z;

Here is an example to show the main calculation with normal z value

float x = 1.0, y = 2.0; // assumed values; x and y are used below but not defined in the question
float z = 1.0; // normal value for z
float p = 50.0, q = 100.0, r = 150.0;
// main calculation Type 2
float c = ((x*100)+(y*100))/(x*100)+(y*100)+152*(p+q/r)+z;

Hence, which one is slower, Type 1 or Type 2? Or will there be no time difference? For a single calculation the difference might not be visible, but if we run millions of such calculations, how will the timing results vary?

Any thoughts, logic, or information will be appreciated.

Note: I am currently not concerned about the resulting value of variable c.

Upvotes: 5

Views: 476

Answers (4)

Emre Acar

Reputation: 920

There won't be any difference.

At the low level, the computer just assigns the value; it doesn't check what was there before.

The calculations' assembly code should be the same.

If you compare the whole programs, the first one should be slower because it performs an extra calculation (0.0/0.0).

Upvotes: -1

rici

Reputation: 241791

It depends on the CPU. Old x86 Intel chips, without SSE, handled NaN very badly (see http://www.cygnus-software.com/papers/x86andinfinity.html for a 10-year-old analysis). However, SSE2/3/4 doesn't suffer from this problem, and I don't believe AMD ever did.

You may well see this problem on modern chips if you tell your compiler to avoid SSE instructions for possible compatibility (but that will definitely slow down floating point regardless of NaNs, so don't do it unless you have to.) As far as I know, this compatibility mode is still the gcc default for x86 builds.

I don't know about ARM floating point units; you'd have to test.

I suspect that NaNs are not the issue they were a decade ago, but enough people have a vague memory of those times that the prejudice will undoubtedly live on for a while longer.
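For what it's worth, on gcc the SSE path described above can be requested explicitly for 32-bit x86 builds (a sketch; the exact flags assume a reasonably recent gcc and an SSE2-capable CPU):

```shell
# 32-bit build: x87 is the historical default; ask for SSE2 scalar math instead
gcc -m32 -msse2 -mfpmath=sse -O2 bench.c -o bench

# 64-bit builds use SSE math by default, so no special flags are needed
gcc -O2 bench.c -o bench
```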

Upvotes: 5

markgz

Reputation: 6266

On some RISC architectures such as MIPS, the floating point unit is designed to handle the common case as fast as possible. The floating point unit traps when it encounters a NaN, then the rest of the calculation is handled using a software IEEE floating point emulator. So calculating with NaNs will be much slower on such an architecture. AFAIK this is not true for the Intel x86 architecture.

Upvotes: 1

I ate some tau

Reputation: 115

Thinking critically is really making me challenge my understanding of C. I think the best answer is to experiment: use time.h to save the clock time before and after each calculation and compute the difference.

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t t1, t2;
    volatile float result = 0.0f; /* volatile so the compiler cannot drop the work */

    t1 = clock();
    /* perform calculation 1 here, assigning to result */
    t2 = clock();
    printf("result = %f, time = %f s\n", result, (double)(t2 - t1) / CLOCKS_PER_SEC);

    t1 = clock();
    /* perform calculation 2 here, assigning to result */
    t2 = clock();
    printf("result = %f, time = %f s\n", result, (double)(t2 - t1) / CLOCKS_PER_SEC);

    return 0;
}

Upvotes: 1
