Reputation: 2583
Denormalized numbers are those where the mantissa does not have an implicit leading 1. They are of the form 0.f, where f is the mantissa stored in the floating point number.
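For example, a quick C check (assuming IEEE-754 binary64 doubles; the exact %a output is implementation-defined) shows the classification and the missing leading 1:

#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void) {
    double normal = DBL_MIN;   // smallest normal double, 2^-1022
    double sub = DBL_MIN / 4;  // below DBL_MIN, so stored without the implicit leading 1
    printf("%d %d\n", fpclassify(normal) == FP_NORMAL, fpclassify(sub) == FP_SUBNORMAL); // 1 1
    printf("%a\n%a\n", normal, sub); // e.g. 0x1p-1022 vs 0x0.4p-1022 (leading hex digit 0)
    return 0;
}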
What I understand so far is that denormal numbers are undesirable. My question is: why can they arise during calculation, and why not just avoid them in the first place, since we have so much range in floating point numbers?
Upvotes: 0
Views: 521
Reputation: 153517
My question is, why can they arise during calculation
Denormal numbers, better called subnormal numbers, are very useful in subtraction.
Consider the 2 smallest normal numbers. What is their difference?
diff = smallest_normal_next - smallest_normal
With subnormal numbers, diff is non-zero*1.
Without subnormal numbers, diff is typically rounded to 0.0.
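A minimal C sketch of that subtraction (assuming IEEE-754 binary64 doubles; nextafter() gives the next representable double after DBL_MIN):

#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void) {
    double smallest_normal = DBL_MIN;
    double smallest_normal_next = nextafter(smallest_normal, INFINITY);
    double diff = smallest_normal_next - smallest_normal;

    printf("%d\n", diff > 0.0);           // 1: non-zero, thanks to subnormals
    printf("%d\n", diff == DBL_TRUE_MIN); // 1: diff is exactly the smallest subnormal (C11 macro)
    return 0;
}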
This helps avoid a whole class of problems that plagued early programming, like:
if (a != b) {         // Prevent division by zero.
    d = c / (a - b);  // Without subnormals, a - b can still round to 0.0, so this guard fails to always prevent division by zero.
}
This gradual loss of precision in the subnormal range eases the transition from small normal floating point numbers down to 0.0.
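A small sketch of that gradual underflow (assuming binary64 with default round-to-nearest and no flush-to-zero mode):

#include <stdio.h>
#include <float.h>

int main(void) {
    double x = DBL_MIN;  // smallest normal double
    int steps = 0;
    while (x > 0.0) {    // keep halving until the value finally rounds to 0.0
        x /= 2;
        steps++;
    }
    printf("%d\n", steps);  // 53: subnormals fill the gap below DBL_MIN; flush-to-zero would give 1
    return 0;
}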
*1 With subnormals, the difference of any 2 finite floating point values (normal or subnormal) that differ in value is non-zero. Without subnormals, the difference of 2 different values risks being computed as zero, depending on rounding.
Upvotes: 1