Reputation: 470
Console.WriteLine(" 0.0 float: 0x00000000, {0}", (float)0x00000000);
Console.WriteLine("-0.0 float: 0x80000000, {0}", (float)0x80000000);
C# prints 0.0 for the first one, but for the second it prints 2.147484E+09 instead of -0.0, which is not what I expected. According to this online IEEE-754 floating-point format converter, 0x80000000 should be -0.0, since the floating-point format uses a sign-magnitude representation, which supports -0.0.
https://www.h-schmidt.net/FloatConverter/IEEE754.html
How do we get C# to print -0.0 for the UInt32 bit pattern 0x80000000?
Upvotes: 1
Views: 910
Reputation: 8553
When you write (float)0x80000000, you are taking an integer value (0x80000000 = 2147483648) and converting it to a float with the same numeric value, which prints as 2.147484E+09. What you want is to take the underlying bytes and reinterpret them as a float:
uint l = 0x80000000;                                          // raw IEEE-754 bit pattern of -0.0f
float f = BitConverter.ToSingle(BitConverter.GetBytes(l), 0); // reinterpret those 4 bytes as a float
Console.WriteLine("-0.0 float: 0x80000000, {0}", f);
Output:
-0.0 float: 0x80000000, 0
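If your runtime has it, you can skip the intermediate byte array. A minimal sketch, assuming .NET Core 2.0 or later, where BitConverter.Int32BitsToSingle is available (on .NET 5+ there is also BitConverter.UInt32BitsToSingle):
uint bits = 0x80000000;                                          // raw bit pattern of -0.0f
float f2 = BitConverter.Int32BitsToSingle(unchecked((int)bits)); // reinterpret the 32 bits directly
Console.WriteLine("-0.0 float: 0x80000000, {0}", f2);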
Upvotes: 3
Reputation: 1293
What I'm after is converting the UInt32 pattern of 0x80000000 into the correct value of -0.0f and printing this value as -0.0 – DragonSpit
I have bad news: apparently there's no way to get what you want directly.
From Peter Sestoft: Some of the confusion over negative zero may stem from the fact that the current implementations of C# print positive and negative zero in the same way, as 0.0, and no combination of formatting parameters seems to affect that display. Although this is probably done with the best of intentions, it is unfortunate. To reveal a negative zero, you must resort to strange-looking code like this, which works because 1/(-0.0) = -Infinity < 0:
public static string DoubleToString(double d) {
    if (d == 0.0 && 1/d < 0)
        return "-0.0";
    else
        return d.ToString();
}
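Putting the two answers together, here is a minimal sketch that converts the UInt32 bit pattern and prints -0.0; the SingleToDisplayString helper is just Sestoft's trick adapted to float (the name is mine, not from the quote):
public static string SingleToDisplayString(float f) {
    // 1/(-0.0f) is float.NegativeInfinity, so this distinguishes -0.0f from +0.0f.
    if (f == 0.0f && 1 / f < 0)
        return "-0.0";
    return f.ToString();
}

uint bits = 0x80000000;                                          // bit pattern of -0.0f
float f = BitConverter.ToSingle(BitConverter.GetBytes(bits), 0); // reinterpret the bytes as a float
Console.WriteLine("-0.0 float: 0x80000000, {0}", SingleToDisplayString(f));
// Prints: -0.0 float: 0x80000000, -0.0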
Upvotes: 2