Gabriel Diego

Reputation: 1020

Which precision is used for intermediate results of integer operation in C language?

Let's say I have the following variables and the following equation:

int16_t a,b,c,d;
int32_t result;

result = a*b - c*d;

Will the intermediate results of a*b and c*d be stored in 16 bits or in 32 bits?

PS: I can test this faster than I can write the question; I want to know what the C specification says.

Upvotes: 2

Views: 709

Answers (2)

Keith Thompson

Reputation: 263557

I will be updating this answer soon. I no longer believe that the standard permits int16_t to be promoted to long. But it can, in some extremely obscure cases, promote to unsigned int. The integer conversion rank rules have some odd results for exotic systems.

chux's answer is practically correct. There are a couple of obscure and unlikely cases where the intermediate result is of type long int.

int16_t is required to be a 16-bit 2's-complement integer type with no padding bits. Operands of type int16_t will be promoted to a type that can represent all possible values of type int16_t and that is at least as wide as int.

The standard requires int to have a range of at least -32767 to +32767.

Suppose int is represented using 1's-complement or sign-and-magnitude or that it's represented using 2's-complement but the representation that would normally be -32768 is treated as a trap representation. Then int cannot hold all values of type int16_t, and the operands must be promoted to long (which is guaranteed to have a wide enough range).

For this to happen, the implementation would have to support both an int type with a restricted 16-bit range (most likely not 2's-complement) and a type that's suitable for int16_t, meaning it has a 2's-complement representation and no padding bits or trap representations. A 1's-complement system, for example, would be more likely not to have such a type, and so it would not define int16_t at all (though it would define int_fast16_t and int_least16_t).

For practically all real-world implementations, int can hold all values of type int16_t, so the intermediate results are of type int. For practically all remaining real-world or hypothetical systems, int16_t would not be provided at all. For the hypothetical tiny fraction of nonexistent but conforming C implementations, the intermediate results are of type long int.

UPDATE: chux points out a possible weakness in my argument. In a comment, he argues that N1570 6.2.6.2 paragraph 2, which says that integer types may be represented using two's complement, ones' complement, or sign and magnitude, is intended to require that all integer types use the same representation (differing in number of bits, of course, but all using the same one of those three choices).

The phrasing of the non-normative text in J.3.5, saying that:

Whether signed integer types are represented using sign and magnitude, two’s complement, or ones’ complement, and whether the extraordinary value is a trap representation or an ordinary value

is implementation-defined, tends to support that interpretation. If different integer types could differ in that respect, it should say so for each integer type.

However:

  1. 6.2.6.2p2 doesn't explicitly say that all integer types must use the same representation, and I'm not aware of anything else that implies that they must do so.

  2. It could be useful to support integer types with different representations. For example, the hardware might support ones' complement, but the implementation might support two's complement integers in software for the sake of code that depends on int16_t et al. Or the hardware might directly support both representations. (My guess is that the authors didn't consider this possibility, but we can only go by what the standard actually says.)

  3. In any case, it's not actually necessary to invoke non-two's-complement representations to construct a case where int16_t promotes to long int.

Assume the following:

  • All integer types use two's complement representation.
  • INT_MAX == +32767
  • INT_MIN == -32767
  • An int with sign bit 1 and all value bits 0 is a trap representation.

int has no padding bits, but the bit pattern that would normally represent -32768 is a trap representation. This is explicitly permitted by N1570 6.2.6.2 paragraph 2.

But the range of int16_t must be -32768 to +32767. 7.20.2.1 says that INTN_MIN is required to be exactly -(2^(N-1)) (N == 16 in this case).

So under this almost entirely implausible but conforming implementation, int16_t is defined, but int cannot represent all values of int16_t, so int16_t values are promoted to long int (which must be at least 32 bits).

Upvotes: 3

chux

Reputation: 154176

The intermediate results will be of type int.

Any type narrower than int will first be promoted. These integer promotions are to type int or unsigned int **. The math therefore must occur at int, unsigned int, or the original type.

int16_t is certainly narrower than or the same width as int.

The type of result is irrelevant to the type of the intermediate results:

int16_t a,b,c,d;
int32_t result = a*b - c*d;

To make this portable for all platforms, including those with an int narrower than int32_t, ensure the products are calculated using at least 32-bit math.

#include <stdint.h>
int32_t result = INT32_C(1)*a*b - INT32_C(1)*c*d;

Of course the result is stored as 32 bits, possibly sign-extending the int intermediate results.

On machines with a 32- or 64-bit int, the intermediate results fit in int32_t with no change in value, without exception. Results range from -2147450880 to 2147450880 (0x80008000 to 0x7FFF8000).

** Never long, not even on unicorn platforms.

Upvotes: 4
