Schrutefarms

Reputation: 31

Char multiplication in C

I have a code like this:

#include <stdio.h>
int main()
{
  char a=20,b=30;
  char c=a*b;
  printf("%c\n",c);
  return 0;
}

The output of this program is X.

How is this output possible, if a*b = 600, which overflows, since char values lie between -128 and 127?

Upvotes: 2

Views: 840

Answers (4)

Andrey Derevyanko

Reputation: 560

First, it looks like you have an unsigned char, with a range from 0 to 255. You're right about the overflow.

600 - 256 - 256 = 88

This is just the ASCII code of 'X'.

Upvotes: -1

cadaniluk

Reputation: 15229

First off, the behavior is implementation-defined here. A char may be either unsigned char or signed char, so it may be able to hold 0 to 255 or -128 to 127, assuming CHAR_BIT == 8.

600 in decimal is 0x258 in hex. What happens is that only the least significant eight bits are stored, so the value is 0x58, a.k.a. 'X' in ASCII.

Upvotes: 3

too honest for this site

Reputation: 12263

Whether char is signed or unsigned is implementation defined. Either way, it is an integer type.

Anyway, the multiplication is done as int due to integer promotions and the result is converted to char.

If the value does not fit into the "smaller" type, how this is done is implementation-defined for a signed char. By far most (if not all) implementations simply cut off the upper bits.

For an unsigned char, the standard actually requires (in effect) cutting off the upper bits.

So:

(int)20 * (int)30 -> (int)600 -> (char)(600 % 256) -> (char)88 == 'X'

(Assuming 8 bit char).

See the link and its surrounding paragraphs for more details.

Note: If you enable compiler warnings (as is always recommended), you should get a truncation warning for the assignment. This can be avoided by an explicit cast (but only if you are really sure about all implications). The gcc option is -Wconversion.

Upvotes: 3

MikeCAT

Reputation: 75062

I thought this code would cause undefined behavior if char is signed, since overflow of a signed integer is undefined behavior; however, conversion to a smaller type is merely implementation-defined.

Quote from N1256 6.3.1.3 Signed and unsigned integers:

3 Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.

If the value is simply truncated to 8 bits, (20 * 30) & 0xff == 0x58, and 0x58 is the ASCII code for X. So, if your system does this and uses ASCII, the output will be X.

Upvotes: 1
