alcatraz

Reputation: 43

Why is float outputting a different result than double?

I have a program that outputs 29.325001 if I use float, but 29.325000 (the correct result) if I use double.

I thought that a float is accurate to 14 decimal places, so why is it producing a different result than expected?

Code below:

    #include <stdio.h>

    int main(void)
    {
        int elements, i;
        float ppp[10], takeHowMuch[10], total;

        scanf("%d", &elements);

        /* Guard against overflowing the 10-element arrays. */
        if (elements > 10) {
            return 0;
        }

        /* Read the two input arrays. */
        for (i = 0; i < elements; i++) {
            scanf("%f", &ppp[i]);
        }

        for (i = 0; i < elements; i++) {
            scanf("%f", &takeHowMuch[i]);
        }

        total = (takeHowMuch[0] * ppp[0]) + (takeHowMuch[1] * ppp[1])
              + (takeHowMuch[2] * ppp[2]) + (takeHowMuch[3] * ppp[3]);

        printf("%.6f", total);

        return 0;
    }

Upvotes: 1

Views: 57

Answers (1)

chux

Reputation: 154572

"I thought that float rounds up to 14 decimal places,"

The code's precision expectations are too high.

A typical float, displayed in decimal, is reliably accurate to at least 6 leading significant digits; 29.325001 shows 8.
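
To see this concretely, here is a minimal sketch (the literal 29.325 stands in for the question's expected total) that prints the values actually stored:

    #include <stdio.h>

    int main(void)
    {
        /* Neither type stores 29.325 exactly; the two nearest
           representable values sit on opposite sides of it. */
        float  f = 29.325f; /* nearest float:  29.325000762939453125 */
        double d = 29.325;  /* nearest double is just under 29.325 */

        printf("%.9f\n", f); /* 29.325000763 -> rounds to 29.325001 at %.6f */
        printf("%.6f\n", d); /* 29.325000 */
        return 0;
    }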

Use FLT_DIG from <float.h>:

    // printf("%.6f", total);
    printf("%.*e\n", FLT_DIG - 1, total);
    printf("%.*g\n", FLT_DIG, total);

Output

    2.93250e+01
    29.325
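
Note the asymmetry in the two precision arguments: with %e the precision counts digits after the decimal point and one digit always precedes it, so FLT_DIG - 1 yields FLT_DIG significant digits in total, whereas with %g the precision is already the total number of significant digits, so FLT_DIG is passed directly.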

Upvotes: 1
