Reputation: 6651
I have the following two blocks of code, which aim to show which type the compiler infers for a var declaration.
var b = 0x80000000 - 0x1;
Console.WriteLine("b: {0}", b);
Console.WriteLine("b.GetType()={0}", b.GetType());
uint val1 = 0x80000000;
int val2 = 0x1;
var c = val1 - val2;
Console.WriteLine("c: {0}", c);
Console.WriteLine("c.GetType(): {0}", c.GetType());
Output:
b: 2147483647 //Result of UInt32 - Int32
//(as the compiler assigns a literal's type in the
//order int, uint, long, ulong)
b.GetType()=System.UInt32 //OK
c: 2147483647 //UInt32 - Int32
c.GetType(): System.Int64 //Not Understood, why not UInt32 ?
If var b
and var c
have almost the same initialization (var c
is even explicit about the operand types), why does c get the unexpected type System.Int64?
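For context, a minimal sketch (not part of my original snippets, variable names are only for illustration) of how an unsuffixed integer literal gets the first of int, uint, long, ulong that can hold its value:
var w = 0x7FFFFFFF;                  // fits Int32            -> System.Int32
var x = 0x80000000;                  // too big for Int32     -> System.UInt32
var y = 0x100000000;                 // too big for UInt32    -> System.Int64
var z = 0x8000000000000000;          // too big for Int64     -> System.UInt64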
Upvotes: 1
Views: 148
Reputation: 19149
Because
var b = 0x80000000 - 0x1;
is already computed at compile time: both operands are constants, so the compiler folds the expression.
But
var val1 = 0x80000000;
var val2 = 0x1;
var c = val1 - val2;
is not computed yet, and the compiler assumes that val1
and val2
may be changed later...
const uint val1 = 0x80000000;
const int val2 = 0x1;
var c = val1 - val2;
c
is UInt32
now, because the compiler computes the expression and knows the result.
Because val1
and val2
are constants, the compiler knows they will not change, so there is no need for Int64
anymore.
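As a minimal sketch of that difference (variable names are mine, assuming a plain console program), compare the inferred type with and without const operands:
uint a1 = 0x80000000;
int a2 = 0x1;
var x = a1 - a2;                     // operands are plain variables, so the long operator is picked
Console.WriteLine(x.GetType());      // System.Int64

const uint b1 = 0x80000000;
const int b2 = 0x1;
var y = b1 - b2;                     // the constant int 0x1 converts to uint, so the uint operator is picked
Console.WriteLine(y.GetType());      // System.UInt32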
Upvotes: 3
Reputation: 203
The problem is that when you do an operation between int
and uint
, both operands have to be converted to a common signed type.
For uint
to be able to store all of its values (from 0 to 2³² - 1) in a signed number, it must be converted to long
(from -2⁶³ to 2⁶³ - 1), since the range of int
is only -2³¹ to 2³¹ - 1.
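A quick sketch (names are illustrative) of why neither int nor uint can hold every possible operand, so the compiler has to go up to long:
uint big = uint.MaxValue;            // 4294967295, does not fit in int
int negative = int.MinValue;         // -2147483648, does not fit in uint
var diff = big - negative;           // both operands are promoted to long
Console.WriteLine(diff);             // 6442450943, larger than uint.MaxValue
Console.WriteLine(diff.GetType());   // System.Int64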
Upvotes: 1
Reputation: 28
The compiler automatically widens the type to 64 bits, as that is the only integral type that can hold every possible result of an operation between UInt32
and Int32
. Try changing the declarations to
ulong val1 = 0x80000000;
long val2 = 0x1;
and you will see a compilation error, as the compiler can't find a type to hold the result.
Int64
is not the inferred type of b
because the compiler detects that the constants fall into the Int32
range. Try
var b = 0x800000000000 - 0x1;
and you will see the inferred type long.
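A small sketch of those two experiments (variable names are mine; the failing line is left commented out so the snippet still compiles):
ulong u = 0x80000000;
long s = 0x1;
// var broken = u - s;               // compile-time error: no '-' operator accepts 'ulong' and 'long'
var wide = 0x800000000000 - 0x1;     // the literal exceeds the UInt32 range, so it is typed long
Console.WriteLine(wide.GetType());   // System.Int64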
Upvotes: -1