Reputation: 595
I am trying to convert an integer into a float like this (simplified):
int64_t x = -((int64_t)1 << 63);
float y = x;
With MSVC 2013 on 64-bit Windows 7 this works just fine, but with gcc 4.8 on Ubuntu 14.04 64-bit I get a positive value for y. I disabled all optimizations and looked at the variables in gdb. I even tried evaluating with gdb directly in order to find the cause of the problem:
(gdb) print (float)(-((int64_t)1 << 63))
$33 = 9,22337204e+18
(gdb) print (float)(-9223372036854775808)
$39 = 9,22337204e+18
As can be seen, not even adding explicit casts solves the problem. I am a bit confused, since float should be able to hold much larger numbers (in terms of absolute value). sizeof(float) == 4 and sizeof(size_t) == 8, in case it matters. It seems that the value -2^63 is some magic limit, since -2^63 + 1 is converted perfectly fine:
(gdb) print (float)(-((int64_t)1 << 63) + 1)
$44 = -9,22337149e+18
Why is the sign lost in the conversion for the value -2^63? It can be represented by int64_t as well as by float, and the conversion works on other platforms as described above.
Upvotes: 3
Views: 507
Reputation: 80276
To avoid undefined behavior, multiply by powers of two using the ldexp standard function: -ldexp(1.0, 63).
Upvotes: 2
Reputation: 34585
The expression (int64_t)1 << 63 shifts a 1 into the sign bit and is therefore undefined behaviour. Even if the shift were successful and gave 0x8000000000000000, that is the minimum (and negative) value an int64_t can hold, so negating it with -((int64_t)1 << 63) asks for +2^63, which is out of range for a signed 64-bit int.
Upvotes: 5