Reputation: 8924
I have two console apps (msvc 2008). When they have division by zero, they behave differently. My questions are below.
a) In one app, the result of division by zero shows as 1.#INF000000000000 in the debugger.
printf with "%4.1f" prints it as "1.$".
b) In the other app, the result of division by zero is 9.2559631349317831e+061 in the debugger.
printf with "%4.1f" prints it as "-1.$".
Why does neither app raise an exception or signal on division by zero?
Isn't an exception/signal the default behaviour?
What are the #define names for the two constants above?
Generally, if I check for denominator == 0 before dividing, which #define value should I use for a dummy result? Is DBL_MIN OK? I found that a NaN value is not.
Can I tell stdio how to format one specific double value as a string of my choosing? I realize it may be too much to ask, but it would be nice to tell stdio to print, say, "n/a" for the value DBL_MIN in my app, as an example.
How should I approach, for best portability, division by zero and printing its results? By printing, I mean "print the number as 'n/a' if it is the result of a division by zero".
What is not clear to me is how to represent the result of division by zero in a single double, in a portable way.
Why two different results? Is it down to compiler options?
The compiler is C++, but it is used very much like C. Thanks.
Upvotes: 1
Views: 736
Reputation: 26154
When doing floating-point division by zero, the result should be infinity (represented with a special bit pattern).
My guess is that the second application does not actually perform a division by zero, but rather a division by a really small number. You can check this by inspecting the underlying representation, either in the debugger or via trace output (you can access it by placing the double in a union with an integer of the same size). Note that simply printing the value might not reveal this, as the printing algorithm sometimes displays really small numbers as zero.
Upvotes: 1