Reputation: 7490
So many arithmetic questions come up on SO, especially about floating point, hence I am merging my 2 questions together.

1. Why does the float value 0.3333333 compare greater than the double value 0.333333333333333? Here is a program that proves it:
static void Main(string[] args)
{
    int a = 1;
    int b = 3;
    // 1/3 computed in single precision, then in double precision.
    float f = Convert.ToSingle(a) / Convert.ToSingle(b);
    double db = Convert.ToDouble(a) / Convert.ToDouble(b);
    // Prints "True": the float result compares greater than the double result.
    Console.WriteLine(f > db);
    Console.Read();
}
2. Why can't a float be implicitly converted to decimal, while an int can? e.g.

decimal d1 = 0.1f; // error
decimal d2 = 1;    // no error
Upvotes: 0
Views: 359
Reputation: 222900
When numerals in source text, such as .3333333, are converted to floating-point, they are rounded to the nearest representable value. It so happens that the nearest float value to .3333333 is slightly greater than .3333333; it is 0.333333313465118408203125. The double value nearest to 0.333333333333333 is slightly less than that; it is 0.333333333333332981762708868700428865849971771240234375. Since 0.333333313465118408203125 is greater than 0.333333333333332981762708868700428865849971771240234375, f > db evaluates to true.
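A minimal C# sketch (using the literals from the question rather than the 1/3 division) makes the rounding visible; the cast from float to double is exact, so printing (double)f exposes the digits the float actually stores:

float f = 0.3333333f;           // stored as 0.333333313465118408203125
double db = 0.333333333333333;  // stored as 0.33333333333333298...
Console.WriteLine(f > db);      // True
Console.WriteLine((double)f);   // about 0.33333331346511... (digits shown vary by runtime)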
I am unfamiliar with the rules of C# and its decimal type. However, I suspect the reason decimal d1 = 0.1f; is disallowed while decimal d2 = 1; is allowed is that not all float values can be converted to decimal without error, while all int values can be converted to decimal without error. According to Microsoft, decimal uses a 96-bit significand, which suffices to represent any int exactly. However, it has a smaller range than float: its largest finite value is 2^96 − 1, around 7.9228·10^28, while the largest finite float is 2^128 − 2^104, around 3.4028·10^38.
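A quick C# sketch of that range claim: converting a float that exceeds decimal's range throws an OverflowException, while every int fits:

float big = 1e30f;                    // larger than decimal.MaxValue ≈ 7.9228e28
try
{
    decimal d = (decimal)big;         // explicit conversion is range-checked
}
catch (OverflowException)
{
    Console.WriteLine("1e30f does not fit in a decimal");
}
Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335 = 2^96 - 1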
Upvotes: 2
Reputation: 271735
For your first question, floats are actually converted to doubles when you use the > operator on them. If you print (double)f, you'll see its value:

0.333333343267441

while db is:

0.333333333333333
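A sketch to show the direction of the conversion (assuming the f and db from the question's program): it is f that is widened to double, not db narrowed to float, and the two directions give different answers:

Console.WriteLine(f > db);         // True:  f is widened to double, keeping its larger value
Console.WriteLine(f > (float)db);  // False: narrowing db to float rounds it up to exactly f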
For the second question, although there isn't an implicit conversion from float to decimal, there is an explicit one, so you can use a cast:

float a = 0.1f;
decimal d = (decimal)a;

I can't find anything in the language spec as to why this is, but I speculate that this conversion isn't something you should usually do, so you need to be explicit about it. Why shouldn't you do this? Because decimal is supposed to represent discrete amounts like currency, while float and double are supposed to represent continuous amounts. They represent two very different things.
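A small sketch of that distinction: binary float/double cannot represent 0.1 exactly, while decimal can, which is why decimal is the safer choice for money:

Console.WriteLine(0.1 + 0.2 == 0.3);     // False: doubles carry binary representation error
Console.WriteLine(0.1m + 0.2m == 0.3m);  // True:  decimal stores these base-10 amounts exactly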
Upvotes: 5