Reputation: 433
I'm trying to understand the following piece of code:
int omgezetteTijd = ((0xFF & rekenOmNaarTijdArray[0]) << 24)
                  | ((0xFF & rekenOmNaarTijdArray[1]) << 16)
                  | ((0xFF & rekenOmNaarTijdArray[2]) << 8)
                  |  (0xFF & rekenOmNaarTijdArray[3]);
What I do not understand is why it is ANDed with 0xFF. You're ANDing an 8-bit value with eight set bits (11111111), so this should give the same result.
But when I do not AND it with 0xFF, I get negative values. Can't figure out why this is happening?
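A minimal sketch that reproduces this (the array contents here are made up, since the real rekenOmNaarTijdArray isn't shown):

public class MaskDemo {
    public static void main(String[] args) {
        // hypothetical 4-byte big-endian value; stands in for rekenOmNaarTijdArray
        byte[] bytes = { 0x00, 0x00, 0x01, (byte) 0x90 };

        int masked   = ((0xFF & bytes[0]) << 24) | ((0xFF & bytes[1]) << 16)
                     | ((0xFF & bytes[2]) << 8)  |  (0xFF & bytes[3]);
        int unmasked = (bytes[0] << 24) | (bytes[1] << 16)
                     | (bytes[2] << 8)  |  bytes[3];

        System.out.println(masked);     // 400
        System.out.println(unmasked);   // -112
    }
}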
Upvotes: 1
Views: 1647
Reputation: 421040
When you OR a byte with an int, the byte is promoted to an int. By default this is done with sign extension. In other words:
// sign bit
// v
byte b = -1; // 11111111 = -1
int i = (int) b; // 11111111111111111111111111111111 = -1
// \______________________/
// sign extension
By doing & 0xFF you prevent this, i.e.
// sign bit
// v
byte b = -1; // 11111111 = -1
int i = (int) (0xFF & b); // 00000000000000000000000011111111 = 255
// \______________________/
// no sign extension
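Put together, a runnable sketch (class name is mine) showing both conversions side by side:

public class SignExtensionDemo {
    public static void main(String[] args) {
        byte b = -1;                     // bit pattern 11111111

        int withoutMask = b;             // widened with sign extension
        int withMask    = b & 0xFF;      // high 24 bits forced to zero

        System.out.println(withoutMask); // -1
        System.out.println(withMask);    // 255
    }
}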
Upvotes: 5
Reputation: 198133
0xFF as a byte represents the number -1. When it's converted to an int, it is still -1, but that has the bit representation 0xFFFFFFFF, because of sign extension. & 0xFF avoids this, treating the byte as unsigned when converting to an int.
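The same point in terms of hex representations, as a small sketch (names are mine):

public class UnsignedByteDemo {
    public static void main(String[] args) {
        byte b = (byte) 0xFF;                               // stored as -1

        System.out.println(Integer.toHexString(b));         // ffffffff (sign extended)
        System.out.println(Integer.toHexString(b & 0xFF));  // ff (byte treated as unsigned)
    }
}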
Upvotes: 3