Reputation: 468
I am confused by the C++ specification and I don't understand the following (I found similar topics, but not exactly my question).
Given the following source:
int16_t a = 10000;
int16_t b = 20000;
int32_t c = a * b;
std::cout << c << "\n";
int32_t ax = 1'000'000'000;
int32_t bx = 2'000'000'000;
int64_t cx = ax * bx;
std::cout << cx << "\n";
The result on a 64-bit CPU (tested with gcc and clang under Linux):
200000000
1321730048
I have multiple problems with this. The biggest is: why does the compiler not cut the first value to 16 bits?
The second is: if this is intentional behavior and the compiler simply doesn't care for optimization reasons, why is the second result cut on a 64-bit CPU?
I can check the generated assembly, but I am wondering what the specification says, and why both compilers show the same behavior.
Upvotes: 5
Views: 131
Reputation: 63
Smaller data types like short and char, which take fewer bytes, get promoted to int or unsigned int per the C++ standard, but larger data types do not. This is called integer promotion.
You can explicitly cast to a larger type before performing the multiplication (or any arithmetic operation) to avoid the overflow, like this:
#include <iostream>

int main()
{
    int ax = 1000000000;
    int bx = 2000000000;
    long long cx = static_cast<long long>(ax) * bx;
    std::cout << cx << "\n";
    return 0;
}
This ensures the correct output: 2000000000000000000.
Upvotes: 3
Reputation: 409422
The issue is about integer promotions, or the lack thereof.
When a value of a type smaller than int (like, for example, short) is used in an expression, it will be promoted to an int. This is what happens with a * b.
Assuming a 32-bit int (i.e. int and int32_t are aliases), there's no truncation or other problem.
The issue with the second is that there's no promotion. You multiply two signed int values to get a result that is too large to fit in an int, and therefore have a signed integer arithmetic overflow, which leads to undefined behavior.
This implicit conversions reference is a good read.
Lastly, a note about int16_t etc.: they are commonly aliases of the native integer types. On most systems int16_t is an alias for short, int32_t is an alias for int, and int64_t is an alias for either long or long long.
Upvotes: 9