user1553924

Reputation: 31

how do we get the following output?

#include <stdio.h>

int main(void)
{ 
    int i = 258;
    char ch = i;
    printf("%d", ch);
}

The output is 2!

How does the range of a variable work? What are the ranges of the different data types in the C language?

Upvotes: 2

Views: 136

Answers (7)

Jeyaram

Reputation: 9504

Look at the bit pattern of the value.

The binary representation of 258 is

00000000 00000000 00000001 00000010

When an int is assigned to a char, only the low 8 bits of the value are kept, i.e. the least significant byte.

Here only 00000010, i.e. 0x02, ends up in the char.

(Note that this does not depend on endianness: the conversion is defined on the value of i, not on its byte order in memory, so a big-endian machine gives the same result.)

Upvotes: 0

TOC

Reputation: 4446

char is 8 bits wide, so when you assign an int to a char on a 32-bit machine (int is 32 bits), the variable i is:

00000000 00000000 00000001 00000010 = 258 (in binary)

When you convert this int to a char, it is truncated to its last 8 bits, so you get:

00000010, which means 2 in decimal. That is why you see this output.

Regards.

Upvotes: 1

md5

Reputation: 23727

This is an overflow; the result depends on whether char is signed (in which case the conversion is implementation-defined) or unsigned (in which case the value wraps around, which is well defined).

Upvotes: 0

Roman Saveljev

Reputation: 2594

In order to find out how long the various types are in C, refer to limits.h (or climits in C++). char is not guaranteed to be 8 bits long. It is just:

the smallest addressable unit of the machine that can contain the basic character set. It is an integer type. The actual type can be either signed or unsigned, depending on the implementation.

Similarly vague definitions are given for the other types.

Alternatively, you can use the sizeof operator to find out the size of a type in bytes.

You may not assume exact ranges for the native C data types. The standard places only minimal restrictions, so you can only say that, for example, unsigned short can hold at least 65536 different values. The upper limit can differ.

Refer to Wikipedia for more reading

Upvotes: 1

Danil Speransky

Reputation: 30473

#include <stdio.h>

int main(void)
{ 
    int i = 258;
    char ch = i;
    printf("%d", ch);
}

At the machine level, i is 00000000 00000000 00000001 00000010 (assuming a 32-bit int). ch takes 1 byte, so it keeps the last 8 bits, 00000010, which is 2.

Upvotes: 1

Sergey Kalinichenko

Reputation: 727077

char is 8 bits long, while 258 requires nine bits to represent. Converting to char chops off the most significant bit of 258, which is 100000010 in binary, leaving 00000010, which is binary for 2.

When you pass a char to printf, it gets promoted to int, which is then picked up by the %d format specifier and printed as 2.

Upvotes: 2

cnicutar

Reputation: 182774

When assigning to a smaller type, the value is

  • truncated, i.e. 258 % 256, if the new type is unsigned
  • modified in an implementation-defined fashion if the new type is signed

Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.

Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.

So all that fancy "adding or subtracting" means it is assigned as if you had written:

ch = i % 256;

Upvotes: 3
