Mohamed Sayed

Reputation: 93

How many bytes is the signed short data type in C?

First, I tried sizeof(signed short); the output was 2 bytes.

But when I tried to check using the hex representation of the signed short, it turned out to be 4 bytes:

#include <stdio.h>

int main(void)
{
    signed short test;

    test = -17; /* any number */
    printf("-17 when viewed as signed short Hexa is \t %x\n", test);
    return 0;
}

The output :

-17 when viewed as signed short Hexa is      ffffffef

ffffffef means 32 bits, not 16 bits!

Upvotes: 2

Views: 2744

Answers (3)

rici

Reputation: 241671

printf is a varargs function; its prototype is:

int printf(const char *format, ...);

That means that the first argument has type const char*, and the remaining arguments do not have a specified type.

Arguments without a specified type undergo the default argument promotions:

  1. float arguments are converted to double.

  2. integer types (both signed and unsigned) which are strictly narrower than an int are converted to a signed int.

  3. all other arguments are unchanged.

This happens before printf is called, and it is not in any way specific to printf: the same promotions are applied to the arguments in a call to any varargs function (any function with an ... in its prototype).
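
As a quick sketch of that rule (the helper function first_int below is purely illustrative, not something from the question), a variadic function has to read a promoted short with va_arg(ap, int), never va_arg(ap, short):

#include <stdarg.h>
#include <stdio.h>

/* Illustrative helper: reads one integer argument from the variable
   list. Because of the default argument promotions, a short passed
   here arrives as an int, so va_arg must ask for int; asking for
   short would be undefined behaviour. */
static int first_int(int count, ...)
{
    va_list ap;
    int value;

    va_start(ap, count);
    value = va_arg(ap, int);
    va_end(ap);
    return value;
}

int main(void)
{
    short s = -17;
    printf("%d\n", first_int(1, s)); /* prints -17 */
    return 0;
}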

Varargs functions have no way of knowing the types of their arguments, so they need to have some convention which lets the caller tell the function what types to expect. In the case of printf, the types are specified in the format string, and printf uses the specified type in the format string in the expectation that it is correct. If you lie to printf by telling it that an argument is of a certain type when it is actually of a different type, the resulting behaviour is undefined, and is occasionally catastrophic (although usually it just means that the wrong thing is printed.)

printf is aware of the default argument promotions. So if you tell it to expect an unsigned short, for example, it will actually expect either an int (if int is wider than unsigned short) or an unsigned int (if int and unsigned short are the same size).

The type of a format item is specified using the format code (such as d and x, which are signed and unsigned int respectively) and possibly modifiers. In particular, the modifier h changes the expectation from int to short, while hh changes it to char. (It doesn't affect the signedness.)
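
For instance (a minimal sketch, not taken from the answer above), the h and hh modifiers are used like this; the comments show the expected output:

#include <stdio.h>

int main(void)
{
    signed short   ss = -17;
    unsigned short us = 0xffefu;
    unsigned char  uc = 0xefu;

    printf("%hd\n", ss);  /* signed short, decimal   -> -17   */
    printf("%hu\n", us);  /* unsigned short, decimal -> 65519 */
    printf("%hx\n", us);  /* unsigned short, hex     -> ffef  */
    printf("%hhx\n", uc); /* unsigned char, hex      -> ef    */
    return 0;
}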

So, if you provide a signed short, then you should use the format code %hd. If you provide an unsigned short, you should use %hu or %hx, depending on whether you want the output in decimal or hexadecimal. There is no way to specify a hexadecimal conversion of a signed argument, so you need to cast the argument to an unsigned type in order to use the hexadecimal format code:

printf("This signed short is printed as unsigned: %hx\n",
       (unsigned short)x);

That will first convert x from short to unsigned short; then (because of the default argument promotions, and assuming short is actually shorter than int) to int. That int is what is actually sent to printf. When printf sees the %hx format code, it knows that it should expect the default promotion of an unsigned short (that is, an int); it takes that int and converts it back to an unsigned short, which it then prints.

Many programmers would just write %x without a cast, just as you did. Technically, that is undefined behaviour and the rather wordier expression above is correct but pedantic. However, it is worth noting that it produces the expected value, whereas the incorrect %x format code without the cast does not.
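
Putting the two side by side with the value from the question (a quick sketch; the exact ffffffef output assumes the common 16-bit short / 32-bit int setup):

#include <stdio.h>

int main(void)
{
    signed short test = -17;

    /* Correct but pedantic: cast to unsigned short, print with %hx. */
    printf("%hx\n", (unsigned short)test); /* ffef */

    /* What the question did: technically undefined, and the whole
       promoted int gets printed, sign extension included.          */
    printf("%x\n", test);                  /* typically ffffffef */
    return 0;
}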

Upvotes: 2

Frankie_C

Reputation: 4877

The specifier %x expects an int, and the compiler automatically converts your short to int before printing it.
If your type is signed, the compiler also performs sign extension: if the value is negative, the sign bit propagates into the upper bits of the int. For example, -17 is 0xffef as a 16-bit short; converting it to a 32-bit int gives 0xffffffef. A positive value such as 17 = 0x11 stays 0x11 as an int and prints as 0x11.
So the test you made does not tell you the size of the type.
On the other hand, the size of a type (int, short, etc.) is compiler and/or machine dependent, and using the sizeof operator is the correct way to check it.
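
A small sketch of that sign extension (the exact hex digits assume a 16-bit short and a 32-bit int); the casts only make explicit the conversions the compiler performs anyway:

#include <stdio.h>

int main(void)
{
    short negative = -17; /* 0xffef as a 16-bit short */
    short positive = 17;  /* 0x0011 as a 16-bit short */

    /* Converting the negative short to int copies its sign bit
       into the upper bits; the positive value is unchanged.    */
    printf("%x\n", (unsigned int)(int)negative); /* ffffffef */
    printf("%x\n", (unsigned int)(int)positive); /* 11       */
    return 0;
}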

Upvotes: 2

conradkleinespel

Reputation: 6987

Wikipedia has some nice tables showing the minimum sizes and value ranges of the different integer types, including short int:

Short signed integer type. Capable of containing at least the [−32767, +32767] range; thus, it is at least 16 bits in size.

This means you should only rely on 16 bits of range in a short int if you want your code to be cross-platform and cross-compiler. But you should still use sizeof(short int) when you need the actual size (for example, to calculate data lengths at runtime), because a short int might be 32 bits on some platforms.
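
A minimal sketch of that check: sizeof reports the real size on the current platform, and <limits.h> gives the actual range:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Only the minimum range [-32767, +32767] is guaranteed;
       the actual size and limits are implementation defined. */
    printf("sizeof(short int) = %zu bytes\n", sizeof(short int));
    printf("SHRT_MIN = %d, SHRT_MAX = %d\n", SHRT_MIN, SHRT_MAX);
    return 0;
}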

Upvotes: 1
