Moeb

Reputation: 10871

What am I missing in the following program?

#include <stdio.h>
#define TOTAL_ELEMENTS (sizeof(array) / sizeof(array[0]))
int array[] = {23, 34, 12, 17, 204, 99, 16};

int main() {
    int d;

    for (d = -1; d <= (TOTAL_ELEMENTS - 2); d++)
        printf("%d\n", array[d + 1]);

    return 0;
}

Why does the for loop not run even once?

Upvotes: 2

Views: 284

Answers (5)

Sinan Ünür

Reputation: 118156

You could

#define TOTAL_ELEMENTS ((int)(sizeof(array) / sizeof(array[0])))

assuming array will always be small enough. After all, you are using an integer to walk through the elements of the array.

Upvotes: 0

Chris Lutz

Reputation: 75439

Because TOTAL_ELEMENTS is an unsigned value (type size_t) and d is a signed value (type int, which is always a signed type). For the comparison, the compiler converts d to the unsigned type, and converting -1 to size_t yields SIZE_MAX, which is certainly greater than TOTAL_ELEMENTS - 2, so the condition is false on the very first test. To do this correctly, cast the unsigned value to a signed value: (int)(TOTAL_ELEMENTS - 2).

Out of curiosity, why are you starting your index at -1 and then adding 1 to it in the loop? Why not just do this:

unsigned i;
for(i = 0; i < (TOTAL_ELEMENTS); i++)
    printf("%d\n", array[i]);

It would be much clearer than what you have.

Upvotes: 2

nullpointer

Reputation: 191

I'm not sure. Does sizeof do what you think it does here? It's been a while, but I think you might be computing the size of an int* when you call sizeof(array), which would be the size of a pointer rather than of the whole array. Dividing that by the size of an int would give a much smaller count than you expect, which could mean your loop never runs.

Edit: it seems more likely that d is being converted to an unsigned type. The other posters may be correct.

Upvotes: -2

Jerry Coffin

Reputation: 490583

You're mixing signed and unsigned arithmetic. sizeof yields a size_t (an unsigned type). When you evaluate d <= (TOTAL_ELEMENTS - 2), d gets converted to unsigned and then compared. Since -1 becomes the largest value of the target type when converted to unsigned, your condition becomes something like 0xffffffff <= 5, which is always false, so the loop never executes.

Upvotes: 4

Heath Hunnicutt

Reputation: 19467

The issue is that sizeof yields a size_t, which is unsigned. Comparing -1 with TOTAL_ELEMENTS - 2 should produce a compiler warning about comparing signed with unsigned. When this comparison happens, the -1 is converted to an unsigned value, SIZE_MAX; on a 32-bit platform, both the bit pattern of -1 and SIZE_MAX are 0xFFFFFFFF.

Your TOTAL_ELEMENTS macro could incorporate a cast to (int), but that isn't technically correct because size_t has a larger value range than int. Best to change your loop variable so that it is declared as size_t and never becomes negative.

Upvotes: 18
