Reputation: 1732
I was playing with decimal today. I noticed this:
Decimal.MaxValue
79228162514264337593543950335
Decimal.MaxValue - 0.5m
79228162514264337593543950334
The following code prints true.
using System;

class Program
{
    static void Main(string[] args)
    {
        decimal d = Decimal.MaxValue - 0.5M;
        var b = d % 1 == 0;  // no fractional part remains
        Console.WriteLine(b);
    }
}
I am sure there is a reason behind this, but I don't know what it is.
Upvotes: 7
Views: 6593
Reputation: 499142
It is not an integer; it is in fact a Decimal.
Integers (Int32) can have values from -2,147,483,648 to 2,147,483,647.
As you can see, this value is well outside that range.
In terms of display and accuracy, 79228162514264337593543950334.5 cannot be represented by a Decimal.
Read this for more details (What Every Computer Scientist Should Know About Floating-Point Arithmetic).
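A minimal sketch of those ranges (the class name here is just for illustration):

using System;

class RangeCheck
{
    static void Main()
    {
        // The Int32 range quoted above:
        Console.WriteLine(int.MinValue); // -2147483648
        Console.WriteLine(int.MaxValue); // 2147483647
        // Decimal.MaxValue lies far outside it:
        Console.WriteLine(Decimal.MaxValue > int.MaxValue); // True
    }
}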
Upvotes: 0
Reputation: 44317
The decimal type uses 96 bits to store the sequence of digits (ref), plus a sign (1 bit) and a scaling factor that specifies the location of the decimal point.
For this decimal number:
79228162514264337593543950335
All 96 bits are used to the left of the decimal point - there's nothing left to represent the fractional part of the answer. So, it gets rounded.
If you divide the number by 10:
7922816251426433759354395033.5
Then you have a few bits available to represent the fractional part - but only to 1/10, no finer.
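You can inspect that layout with decimal.GetBits, which returns the three 32-bit chunks of the 96-bit integer plus a flags word holding the sign and scale. A minimal sketch (the commented output for MaxValue / 10m is what the layout above implies, not quoted from anywhere):

using System;

class DecimalLayout
{
    static void Main()
    {
        // GetBits returns int[4]: the low, mid and high 32 bits of the
        // 96-bit integer, then a flags word (scale in bits 16-23, sign in bit 31).
        Console.WriteLine(string.Join(", ", decimal.GetBits(Decimal.MaxValue)));
        // -1, -1, -1, 0  -> all 96 bits set, scale 0, no room for fractional digits

        Console.WriteLine(string.Join(", ", decimal.GetBits(Decimal.MaxValue / 10m)));
        // Expected: -1, -1, -1, 65536 -> same 96-bit integer, scale 1 (tenths)
    }
}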
The key difference between decimal and double/float is that decimal is based on a decimal scaling factor specifying the location of a decimal point; the other floating-point types are based on a binary scaling factor specifying the location of a binary point.
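A small sketch of that difference, using 0.1, which has an exact decimal representation (integer 1, scale 1) but no finite binary expansion:

using System;

class ScalingDemo
{
    static void Main()
    {
        // Exactly representable with a decimal scaling factor,
        // so decimal arithmetic stays exact here:
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True

        // 0.1 has no finite binary expansion, so double rounds it:
        Console.WriteLine(0.1 + 0.2 == 0.3);    // False
    }
}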
Upvotes: 13
Reputation: 20076
0.5 is being rounded before subtraction. decimal strives to make the result as precise as possible, so the operation becomes 79228162514264337593543950335 - 00000000000000000000000000000.5. But 0.5 cannot be represented as a decimal of the required precision and is rounded upwards to 1.
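A minimal sketch of that effect, assuming the values shown in the question: subtracting 0.5m then behaves exactly like subtracting 1m.

using System;

class RoundingDemo
{
    static void Main()
    {
        // Per the question, Decimal.MaxValue - 0.5m yields
        // 79228162514264337593543950334, which is also MaxValue - 1m:
        decimal a = Decimal.MaxValue - 0.5m;
        decimal b = Decimal.MaxValue - 1m;
        Console.WriteLine(a == b); // True - the 0.5 behaved like 1
    }
}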
Upvotes: 1