cpd1

Reputation: 789

Typecasting a negative number to positive in C

I wouldn't expect the value that gets printed to be the initial negative value. Is there something I'm missing for type casting?

#include<stdint.h>


int main() {

    int32_t color = -2451337;
    uint32_t color2 = (uint32_t)color;

    printf("%d", (uint32_t)color2);
    return 0;
}

Upvotes: 1

Views: 2164

Answers (2)

Keith Thompson

Reputation: 263267

int32_t color = -2451337;
uint32_t color2 = (uint32_t)color;

The cast is unnecessary; if you omit it, exactly the same conversion will be done implicitly.

For any conversion between two numeric types, if the value is representable in both types, the conversion preserves the value. But since color is negative, that's not the case here.

For conversion from a signed integer type to an unsigned integer type, the result is well defined: the value is reduced modulo 2^N, where 2^N is one more than the maximum value of the unsigned type, here 2^32. (It's conversion in the other direction, of an out-of-range value to a signed type, that gives an implementation-defined result or raises an implementation-defined signal, and I don't know of any compiler that raises a signal.)

With two's-complement representation, that modular reduction is the same as simply copying or reinterpreting the bits making up the representation. The standard requires int32_t to use two's-complement representation, so the result here is -2451337 + 2^32 = 4292515959.

(The standard permits one's-complement and sign-and-magnitude representations for signed integer types in general, but specifically requires int32_t to use two's-complement; a C compiler for a one's-complement CPU probably just wouldn't define int32_t.)
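For example, here's a minimal sketch of that conversion, using the same names and value as in the question (and "%ju", discussed below, to print the result):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int32_t color = -2451337;
    uint32_t color2 = color;          /* implicit conversion; no cast needed */

    /* the value is reduced modulo 2^32: -2451337 + 4294967296 == 4292515959 */
    printf("%ju\n", (uintmax_t)color2);
    return 0;
}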

printf("%d", (uint32_t)color2);

Again, the cast is unnecessary, since color2 is already of type uint32_t. But the "%d" format requires an argument of type int, which is a signed type (and may be as narrow as 16 bits). Arguments to a variadic function like printf are not converted to match the format string, so the uint32_t value isn't converted to int. Most likely the representation of color2 will simply be treated as if it were an int object, which is why you see the original -2451337, but the behavior is undefined, so as far as the C standard is concerned quite literally anything could happen.

To print a uint32_t value, you can use the PRIu32 macro defined in <inttypes.h>:

printf("%" PRId32, color32);

Or, perhaps more simply, you can convert it to the widest unsigned integer type and use "%ju":

printf("%ju", (uintmax_t)color32);

Either way, this will print 4292515959, the value of color2.

And you should add a newline \n to the format string.

More quibbles:

You're missing #include <stdio.h>, which is required if you call printf.

int main() is ok, but int main(void) is preferred.
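Putting those quibbles and the "%" PRIu32 suggestion together, the whole program might look something like this (just a sketch, keeping your original value):

#include <stdio.h>      /* printf */
#include <stdint.h>     /* int32_t, uint32_t */
#include <inttypes.h>   /* PRIu32 */

int main(void) {
    int32_t color = -2451337;
    uint32_t color2 = color;          /* implicit signed-to-unsigned conversion */

    printf("%" PRIu32 "\n", color2);  /* prints 4292515959 */
    return 0;
}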

Upvotes: 3

Bob Moore

Reputation: 6894

You took a bunch of bits (stored in a signed value). You then told the CPU to interpret that bunch of bits as unsigned. You then told the CPU to render the same bunch of bits as signed again (%d). You would therefore see the same value you first entered.

C just deals in bunches of bits. If the value you had chosen were near the representational limit of the type(s) involved (read up on two's-complement representation), we might see some funky effects, but the value you happened to choose wasn't. So you got back what you put in.
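Here's a rough sketch of that round trip, with an explicit cast back to int32_t instead of %d. The variable names are just for illustration, and the cast back to a signed type is implementation-defined for out-of-range values, though on typical two's-complement hardware it just reinterprets the same bits:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    int32_t original = -2451337;
    uint32_t as_bits = (uint32_t)original;   /* same 32 bits, read as 4292515959 */
    int32_t back = (int32_t)as_bits;         /* implementation-defined, typically -2451337 again */

    printf("%" PRIu32 " %" PRId32 "\n", as_bits, back);
    return 0;
}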

Upvotes: 4
