jab

Reputation: 4063

C unsigned integer subtraction, macros, and typecasting

Apologies for the Objective-C nature of the example code, but I'm pretty sure the answer to my question is within the C standard libraries and/or the Apple clang compiler.

I have an NSArray with a variable number of items in it. I want to use the item count to create a value between 1 and 3. I'm using the C MAX macro, but it has some strange behavior:

NSLog( @"%d %d %d %d", 1, [tasks count], 3 - [tasks count], MAX( 1, 3 - [tasks count] ) );

The output of this log statement, as the number of items in tasks increases, is:

1 0 3 3
1 1 2 2
1 2 1 1
1 3 0 1
1 4 -1 -1

I dug into the docs a little and found that the count method returns an NSUInteger. The solution to my dilemma is simply casting the return value to NSInteger:

NSLog( @"%d %d %d %d", 1, (NSInteger)[tasks count], 3 - (NSInteger)[tasks count], MAX( 1, 3 - (NSInteger)[tasks count] ) );

1 0 3 3
1 1 2 2
1 2 1 1
1 3 0 1
1 4 -1 1

(If you're unfamiliar with Objective-C, on a 32-bit architecture NSInteger is typedef'd to int and NSUInteger is unsigned int.)

I'm struggling to understand the typecasting that implicitly happened in my original code, leading to my unintuitive result. Can anybody illuminate?

Upvotes: 0

Views: 986

Answers (2)

Dietrich Epp

Reputation: 213388

Turn on your compiler warnings. As you say, the Objective C bit is irrelevant, so I will refer to C from here on out.

Let's say your MAX() macro is defined this way (using GCC extensions):

#define MAX(x, y) ({ typeof (x) _x = (x); \
                     typeof (y) _y = (y); \
                     _x > _y ? _x : _y; })

(The MAX() macro is nonstandard, so your definition may differ, but this is a common one.)

When you compute MAX(1, 3 - [tasks count]), you get:

int x = 1;
unsigned y = 3 - [tasks count];
x > y ? x : y;

Now, it is possible to properly compare an int and an unsigned in C, but that's not the behavior you get when using x > y in C. Instead, both operands are converted to unsigned (due to the "usual arithmetic conversions").

So your comparison is (unsigned) 1 > (unsigned) -1, which is false, because (unsigned) -1 is the largest possible unsigned, which is 0xffffffff on most systems (both 32 and 64 bit).

Where is the error?

NSLog(@"%d", [tasks count]);

This is technically wrong: you are passing an unsigned to NSLog() but using the %d format specifier, which is for int. Use %u instead, or cast your values to int first.

Compiler warnings

The compiler will emit two warnings:

  • It may tell you that you are comparing a signed against an unsigned when you compute MAX(1, 3 - [tasks count]).

  • It will tell you that you are passing an unsigned to NSLog() when the format expects an int.

Upvotes: 2

nos

Reputation: 229158

3 - [tasks count] is done with unsigned types, so the subtraction wraps around, and becomes 0xffffffff. So you get MAX(1, 0xffffffff) with unsigned types.

However, you print this out as a signed integer: since your NSLog() format string uses %d, NSLog treats the bits of the argument as if they were a signed int, and the bit pattern 0xffffffff is -1.

Upvotes: 2
