brainydexter

Reputation: 20356

Why is this method for computing the sign of an integer architecture-specific?

From this link, here is how to compute the sign of an integer:

int v;      // we want to find the sign of v
int sign;   // the result goes here 

sign = v >> (sizeof(int) * CHAR_BIT - 1);
// CHAR_BIT is the number of bits per byte (normally 8)

If I understand this correctly, if sizeof(int) = 4 bytes => 32 bits

The MSB, or 32nd bit, is reserved for the sign. So we right shift by (sizeof(int) * CHAR_BIT - 1), all the other bits fall off the right side, and only the previous MSB is left at index 0. If the MSB is 1, v is negative; otherwise it is positive.

Is my understanding correct?

If so, can someone please explain what the author meant here by this approach being architecture-specific:

This trick works because when signed integers are shifted right, the value of the far left bit is copied to the other bits. The far left bit is 1 when the value is negative and 0 otherwise; all 1 bits gives -1. Unfortunately, this behavior is architecture-specific.

How will this be any different for a 32-bit or 64-bit architecture?
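For reference, this is the small test I have in mind (assuming a 32-bit int; according to the quoted text it should print -1 for a negative v and 0 otherwise):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int v = -5;     // any negative value
    int sign = v >> (sizeof(int) * CHAR_BIT - 1);

    // Per the quoted article this should be -1, assuming the
    // compiler copies the sign bit in (an arithmetic shift)
    printf("%d\n", sign);
    return 0;
}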

Upvotes: 0

Views: 198

Answers (2)

Pete Becker

Reputation: 76315

It's "architecture-dependent" because in C++ the effect of a right shift of a negative value is implementation defined (in C it produces undefined behavior). That, in turn, means that you cannot rely on the result unless you've read and understood your compiler's documentation of what it does. Personally, I'd trust the compiler to generate appropriate code for v < 0 ? -1 : 0.

Upvotes: 0

Mats Petersson

Reputation: 129374

I believe that the "architecture dependent" part comes down to what sorts of shift operations the processor supports. x86 (in 16-, 32- and 64-bit modes) supports both an "arithmetic shift" and a "logical shift". The arithmetic variant copies the top bit of the shifted value down as it shifts; the logical shift does not, it fills with zeros.
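To illustrate the difference, here is a small sketch; in C the shift of the unsigned operand is always logical, while the signed case is implementation-defined but typically an arithmetic shift on x86:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int      s = -8;
    unsigned u = (unsigned)s;
    int  shift = sizeof(int) * CHAR_BIT - 1;

    // Typically an arithmetic shift: the sign bit is copied down,
    // so all bits end up set and the result is -1
    printf("signed:   %d\n", s >> shift);

    // Always a logical shift for unsigned operands: zeros are
    // shifted in, leaving just the old sign bit, i.e. 1
    printf("unsigned: %u\n", u >> shift);
    return 0;
}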

However, on an architecture that ONLY has the "logical" shift, the compiler would have to emulate the arithmetic shift with code along the lines of:

int temp = (1 << 31) & v;     // isolate the sign bit
int sign = v;
int i;
for(i = 0; i < 31; i++)
  sign = temp | (sign >> 1);  // shift one step, re-inserting the sign bit at the top

Most architectures have both variations, but there are processors that don't. (Sorry, I can't find a reference that shows which processors have and haven't got both variants of shift.)

There may also be issues with 64-bit machines that can't distinguish between 64-bit and 32-bit shifts, and thus shift in bits from the upper 32 bits of the number rather than copying the sign bit of the 32-bit value. I'm not sure whether such processors exist or not.

The other part is of course to determine whether the sign of -0 on a ones' complement machine should come out as 0 or -1. That really depends on what you are trying to do.

Upvotes: 3
