Ed Swangren

Reputation: 124702

Differences in Overflow Semantics Between C# and Java

Take the following piece of code:

const float fValue = 5.5f;
const float globalMin = 0.0f;
const float globalMax = 5.0f;
float vFactor = (float)(2e9 / (globalMax - globalMin));
int iValue = (int)((fValue - globalMin) * vFactor); 

That last line produces a value that overflows an int. In C#, the spec leaves the result of such an unchecked conversion unspecified.

In Java... well, I don't know; that's why I'm here. I know how typical integer overflow (e.g., Integer.MAX_VALUE + 1) is handled, but I can't find anything in the spec that refers to overflow resulting from a conversion from a float.

In my tests (Java), the result of that last line is Integer.MAX_VALUE, which tells me something more is going on: I would expect -2094967296 if the value simply wrapped around. It looks like Java clamps to Integer.MAX_VALUE on overflow.
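
For reference, here is the same computation written out as a self-contained Java sketch (the class name and the println calls are mine, not part of the original snippet); it prints 2147483647, i.e. Integer.MAX_VALUE, rather than a wrapped value:

public class FloatToIntOverflow {
    public static void main(String[] args) {
        final float fValue = 5.5f;
        final float globalMin = 0.0f;
        final float globalMax = 5.0f;
        // 2e9 / 5.0f == 4.0e8, comfortably within float range
        float vFactor = (float) (2e9 / (globalMax - globalMin));
        // (5.5f - 0.0f) * 4.0e8f == 2.2e9f, which is larger than Integer.MAX_VALUE
        int iValue = (int) ((fValue - globalMin) * vFactor);

        System.out.println(iValue);                       // 2147483647
        System.out.println(iValue == Integer.MAX_VALUE);  // true: the cast saturates, it does not wrap
    }
}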

EDIT: Thanks to @Pascal Cuoq for pointing out that the SSE2 instruction that truncates a float to an int produces INT_MIN on overflow. I'm going to fix the bug on the C# side, but I am still curious as to where (or whether) this behavior is specified for Java.

Upvotes: 2

Views: 155

Answers (1)

c.s.

Reputation: 4816

Conversions between data types are well defined in the Java Language Specification (see the chapter Conversions and Promotions).

I believe your case falls under 5.1.3, Narrowing Primitive Conversion, where you can see that:

  • First, your float is rounded toward zero to an integer value (IEEE 754 round-toward-zero mode), and
  • if that value is too large or too small to fit in an int, the result is the largest or smallest representable int value, respectively (Integer.MAX_VALUE or Integer.MIN_VALUE), as the short sketch below illustrates.
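
A quick sketch of both rules in action (the class name is made up; the printed values follow from the JLS text):

public class NarrowingDemo {
    public static void main(String[] args) {
        System.out.println((int)  5.9f);    // 5: rounded toward zero, not to nearest
        System.out.println((int) -5.9f);    // -5: rounding toward zero again
        System.out.println((int)  2.2e9f);  // 2147483647: too large, clamped to Integer.MAX_VALUE
        System.out.println((int) -2.2e9f);  // -2147483648: too small, clamped to Integer.MIN_VALUE
    }
}

So the Integer.MAX_VALUE you observed is exactly what the specification mandates, not an accident of the JIT or the platform.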

Upvotes: 2
