Reputation: 163
Consider the following program:
#include <stdio.h>
#include <stdint.h>

int main()
{
    uint16_t result;
    uint16_t ui = 1;
    int16_t si = -1;

    result = si * ui;
    printf("%i", result);
    return 0;
}
This prints the value 65535, which is what I expect after having read this post: si is converted to ui's type, so max+1 is added to it. Now, in the next code snippet, I change the type of result to uint_fast16_t.
#include <stdio.h>
#include <stdint.h>

int main()
{
    uint_fast16_t result;
    uint16_t ui = 1;
    int16_t si = -1;

    result = si * ui;
    printf("%li", result);
    return 0;
}
Now, the result is -1. What happens here? How can the result be signed?
Upvotes: 0
Views: 150
Reputation: 491
Please see the code below. (As @Tom Karzes said, uint_fast16_t may be an ordinary unsigned int on some systems, in which case %lu would be the wrong format. And @bolov said that one should use printf("%" PRIuFAST16 "\n", result); and printf("%" PRIdFAST16 "\n", (int_fast16_t) result); instead. So I changed my answer.)
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main()
{
    uint_fast16_t result;
    uint16_t ui = 1;
    int16_t si = -1;

    result = si * ui;
    printf("sizeof(result) is %zu\n", sizeof(result));
    printf("%" PRIuFAST16 "\n", result);
    printf("%" PRIdFAST16 "\n", (int_fast16_t) result);
    return 0;
}
Running it will output (on a 64-bit computer):
sizeof(result) is 8
18446744073709551615
-1
Why is the output different for the same result? One uses PRIuFAST16, the other PRIdFAST16. It's because it depends on how you view the bits: as unsigned or as signed.
Upvotes: -1
Reputation: 213276
si is converted to ui

Maybe, or maybe not; it depends on the system. You might be over-simplifying here and cutting corners when trying to understand this. Check out Implicit type promotion rules regarding the si * ui expression.

If int is 16 bits, then in si * ui the signed operand is converted, as per "the usual arithmetic conversions", to unsigned int.

If int is 32 bits, then in si * ui both operands are converted to int, which is a signed 32-bit type. The result of the operation is -1. You then store this in an unsigned type in the next step, and then the signed integer is converted to an unsigned type.

Furthermore, printf might make an internal conversion if you lie to it and pass a type which doesn't match the conversion specifier. The correct formats for these types are:
#include <inttypes.h>
printf("%"PRIu16 "\n", x); // for uint16_t
printf("%"PRIuFAST16 "\n", x); // for uint_fast16_t
So what happened when you changed to a "fast" type? Very likely uint_fast16_t was replaced with a 32 or 64 bit integer type. In which case result = si * ui; goes like this:

Both si and ui are converted to int.

The multiplication is carried out on int; the result is -1.

The -1 is stored in result, an unsigned type, where it is converted to a large unsigned number.

You then pass this to printf and tell it to read the large unsigned number as signed long even though it is not. This is strictly speaking undefined behavior (wrong conversion specifier), but in practice it seems an implementation-defined conversion from a large unsigned integer to signed long took place inside printf.

Example tried on x86_64 Linux:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    uint_fast16_t result;
    uint16_t ui = 1;
    int16_t si = -1;

    result = si * ui;
    printf("result is %zu bytes\n", sizeof(result));
    printf("result has value %" PRIuFAST16 "\n", result);
    printf("result converted to long: %ld\n", (long)result);
    return 0;
}
Output:
result is 8 bytes
result has value 18446744073709551615
result converted to long: -1
And this is just because long happened to be an 8-byte type too on my system.
Upvotes: 6