Utkarsh Malviya

Reputation: 73

Since characters from -128 to -1 are the same as those from +128 to +255, what is the point of using unsigned char?

#include <stdio.h>
#include <conio.h>   /* getch() is non-standard (Windows-specific) */
int main()
{
    char a = -128;   /* assumes plain char is signed */
    while (a <= -1)
    {
        printf("%c\n", a);
        a++;
    }
    getch();
    return 0;
}

The output of the above code is the same as the output of the code below (except that the second loop stops at 254, since a <= 255 would always be true for an unsigned char):

#include <stdio.h>
#include <conio.h>   /* getch() is non-standard (Windows-specific) */
int main()
{
    unsigned char a = 128;
    while (a <= 254)   /* a <= 255 would loop forever: an unsigned char wraps to 0 */
    {
        printf("%c\n", a);
        a++;
    }
    getch();
    return 0;
}

Then why do we use unsigned char and signed char?

Upvotes: 4

Views: 438

Answers (5)

nalzok

Reputation: 16147

Different types are created to tell the compiler how to "understand" the bit representation of one or more bytes. For example, say I have a byte which contains 0xFF. If it's interpreted as a signed char, it's -1; if it's interpreted as an unsigned char, it's 255.

In your case, a, whether signed or unsigned, undergoes integer promotion to int and is passed to printf(), which then converts it back to unsigned char before printing it as a character.
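That is why the two loops in the question print the same characters. A minimal sketch, assuming 8-bit two's-complement char (the values here are arbitrary):

#include <stdio.h>

int main(void)
{
    signed char s = -96;    /* bit pattern 0xA0 */
    unsigned char u = 160;  /* the same bit pattern 0xA0 */

    /* Both are promoted to int; "%c" converts each back to
       unsigned char, so the same character is printed twice. */
    printf("%c %c\n", s, u);
    return 0;
}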

But let's consider another case:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char a = -1;
    unsigned char b;
    memmove(&b, &a, 1);       /* copy the byte 0xFF into b */
    printf("%d %u\n", a, b);  /* one byte, two interpretations */
}

In practice you could simply write printf("%d %u", a, a); (with 32-bit ints that prints -1 4294967295, because %u reinterprets the promoted value -1), but passing a negative value for %u is undefined behaviour; memmove() is used just to avoid that.

Its output on my machine is:

-1 255

Also, think about this ridiculous question:

Suppose sizeof (int) == 4. Since the byte arrays (unsigned char[]){0, 0, 0, 0} through (unsigned char[]){UCHAR_MAX, UCHAR_MAX, UCHAR_MAX, UCHAR_MAX} cover the same bit patterns as the unsigned ints from 0 to UINT_MAX, what is the point of using unsigned int?
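To make the analogy concrete, here is a small sketch, assuming sizeof (int) == 4 as above, that reinterprets four 0xFF bytes as one unsigned int, just as the single byte 0xFF was reinterpreted earlier:

#include <stdio.h>
#include <string.h>
#include <limits.h>

int main(void)
{
    unsigned char bytes[4] = { UCHAR_MAX, UCHAR_MAX, UCHAR_MAX, UCHAR_MAX };
    unsigned int n;

    memcpy(&n, bytes, sizeof n);  /* reinterpret the four bytes as one int */
    printf("%u\n%u\n", n, UINT_MAX);  /* both lines print 4294967295 here */
    return 0;
}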

Upvotes: 0

chux

Reputation: 154562

With printing characters - no difference:

With "%c", printf() takes the int argument, converts it to unsigned char, and then prints it.

char a = 'x';
printf("%c\n", a);   // a is converted to int, then passed to printf()
unsigned char ua = 'x';
printf("%c\n", ua);  // ua is converted to int, then passed to printf()

With printing values (numbers) - a difference when the system's char is signed:

char a = -1;
printf("%d\n",a);     // --> -1
unsigned char ua = -1;
printf("%d\n",ua);    // --> 255  (Assume 8-bit unsigned char)

Note: rare machines have int the same size as char, and other concerns then apply.

So if code uses a as a number rather than a character, the printing differences are significant.
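Putting both cases into one runnable program (a sketch, assuming 8-bit char and a platform where plain char is signed):

#include <stdio.h>

int main(void)
{
    char a = -1;             /* the byte 0xFF where char is signed */
    unsigned char ua = -1;   /* wraps to 255 */

    printf("%c %c\n", a, ua);  /* same character: both bytes are 0xFF */
    printf("%d %d\n", a, ua);  /* as numbers: -1 255 */
    return 0;
}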

Upvotes: 2

WiSaGaN

Reputation: 48127

Because unsigned char is used for one-byte integers in C89.

Note that there are three distinct character-related types in C89: char, signed char, and unsigned char.

For character data, char is used.

unsigned char and signed char are used for one-byte integers, just as short is used for two-byte integers. You should not really use signed char or unsigned char for characters, nor should you rely on the ordering of their values.
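For example (a hypothetical illustration, not part of the original answer), a byte-wise checksum is a typical place where unsigned char serves as a one-byte integer:

#include <stdio.h>
#include <string.h>

/* Sum the bytes of a buffer modulo 256. unsigned char gives
   well-defined wraparound; a possibly-signed plain char would not. */
static unsigned char checksum(const unsigned char *buf, size_t len)
{
    unsigned char sum = 0;
    size_t i;
    for (i = 0; i < len; i++)
        sum += buf[i];   /* wraps modulo 256 by definition */
    return sum;
}

int main(void)
{
    const char *msg = "hello";
    printf("%u\n", (unsigned)checksum((const unsigned char *)msg, strlen(msg)));
    return 0;
}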

Upvotes: 2

Edwin Buck

Reputation: 70979

The bit representation of a number is what the computer stores, but it doesn't mean anything without someone (or something) imposing a pattern onto it.

The difference between the unsigned char and signed char patterns is how we interpret the bits. In one case we decide that zero is the smallest number, and counting up takes us to 0xFF (binary 11111111). In the other case we decide that 0x80 (-128) is the smallest number, and counting up takes us through 0xFF (-1) and 0x00 to 0x7F (127).

The reason we have this funny way of representing signed numbers (the latter pattern, known as two's complement) is that it places zero 0x00 roughly in the middle of the sequence, and 0xFF (which is -1, right before zero) plus 0x01 (which is +1, right after zero) add together so that the carry runs off the high end, leaving 0x00 (-1 + 1 = 0). Likewise -5 + 5 = 0 by the same mechanism.
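The carry is easy to watch in C; a small sketch, assuming 8-bit two's-complement bytes:

#include <stdio.h>

int main(void)
{
    unsigned char a = 0xFF;      /* the bit pattern for -1 */
    unsigned char b = 0x01;      /* the bit pattern for +1 */
    unsigned char sum = a + b;   /* the carry falls off the high end */

    printf("%02X + %02X = %02X\n",
           (unsigned)a, (unsigned)b, (unsigned)sum);  /* FF + 01 = 00 */
    return 0;
}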

For fun, note that a given bit pattern can mean different things. For example, 0x2a might be the number 42, or it might be the character *. It depends on the context we choose to impose on the bit pattern.
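A one-liner makes the point, assuming an ASCII-compatible character set:

#include <stdio.h>

int main(void)
{
    printf("%d %c\n", 0x2a, 0x2a);  /* prints "42 *" on ASCII systems */
    return 0;
}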

Upvotes: 2

Rahul Tripathi

Reputation: 172628

K & R, chapter and verse, p. 43 and 44:

There is one subtle point about the conversion of characters to integers. The language does not specify whether variables of type char are signed or unsigned quantities. When a char is converted to an int, can it ever produce a negative integer? The answer varies from machine to machine, reflecting differences in architecture. On some machines, a char whose leftmost bit is 1 will be converted to a negative integer ("sign extension"). On others, a char is promoted to an int by adding zeros at the left end, and thus is always positive. [...] Arbitrary bit patterns stored in character variables may appear to be negative on some machines, yet positive on others. For portability, specify signed or unsigned if non-character data is to be stored in char variables.
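A sketch of the portability trap K&R describe; whether it prints -1 or 255 depends on whether plain char is signed on your machine:

#include <stdio.h>

int main(void)
{
    char c = '\xFF';    /* leftmost bit set */
    printf("%d\n", c);  /* -1 with sign extension, 255 where char is unsigned */
    return 0;
}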

Upvotes: 3
