Reputation: 3481
I need to determine the standard deviation of a set of inputs. This requires the summation of (each input - the mean)^2. I have the following code to do this (510 is an example input for testing):
int testDiff = 510;
console.printf("testDiff = %i \r\n",testDiff);
double testDiff_double = static_cast<double>(testDiff);
console.printf("testDiff_double = %d \r\n",testDiff_double);
double result = pow(static_cast<double>(testDiff),2);
console.printf("result = %d \r\n",result);
However, this code does not generate the expected output (260100). If I print the values obtained at each stage, I get the following:
testDiff = 510
testDiff_double = 0
pow = 0
Why is the conversion from integer to double failing? I am using a static_cast, so I was expecting an error at compile time if there was a logical issue with the conversion.
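For context, the full calculation I am building up to is roughly the following (a simplified sketch; sumSquaredDiff and the variable names are only illustrative):
// Simplified sketch: sum of (input - mean)^2 over all samples
double sumSquaredDiff(const int samples[], int count, double mean) {
    double sum = 0.0;
    for (int i = 0; i < count; ++i) {
        double diff = static_cast<double>(samples[i]) - mean;
        sum += diff * diff; // same as pow(diff, 2)
    }
    return sum;
}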
Note (while it shouldn't matter): I'm running this code on an MBED microcontroller.
Upvotes: 0
Views: 185
Reputation: 12321
You have to use %f, %e, %E, or %g in order to display a double/float value.
From the printf reference page:
c Character
d or i Signed decimal integer
e Scientific notation (mantissa/exponent) using e character
E Scientific notation (mantissa/exponent) using E character
f Decimal floating point
g Use the shorter of %e or %f
G Use the shorter of %E or %f
o Unsigned octal
s String of characters
u Unsigned decimal integer
x Unsigned hexadecimal integer
X Unsigned hexadecimal integer (capital letters)
p Pointer address
n Nothing printed. The argument must be a pointer to a signed int, where the number of characters written so far is stored.
% A % followed by another % character will write % to stdout.
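As a minimal sketch, here is the question's snippet with only the format specifiers corrected (written with std::printf so it compiles stand-alone; on MBED, console.printf accepts the same format strings):
#include <cstdio>
#include <cmath>

int main() {
    int testDiff = 510;
    double testDiff_double = static_cast<double>(testDiff);
    double result = std::pow(testDiff_double, 2);

    std::printf("testDiff = %i \r\n", testDiff);                // %i matches an int argument
    std::printf("testDiff_double = %f \r\n", testDiff_double);  // %f matches a double argument
    std::printf("result = %f \r\n", result);                    // prints 260100.000000
    return 0;
}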
Upvotes: 1
Reputation: 272467
Because you are using %d rather than %f to display the floating-point values.
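For instance, changing only the format strings in your code (a sketch, assuming console.printf uses the standard printf conversion specifiers):
console.printf("testDiff_double = %f \r\n", testDiff_double); // %f matches a double argument
console.printf("result = %f \r\n", result);                   // prints 260100.000000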
Upvotes: 2