Reputation: 77
I need help solving a really simple problem. I wrote code that adds two numbers, but only as type float. So when I enter 2+2, it gives me 4.0. What do I need to do to get just 4, but at the same time get 10.5 when I enter 5.2+5.3? It's homework, and it should not use an if statement.
I tried all the variable types, but they just give me unrealistic numbers. I would really appreciate it if someone helped.
Code
#include <stdio.h>

/* Addition of two numbers */
int main()
{
    float a;
    float b;
    float x;

    printf("Enter the first number:\n");
    scanf("%f", &a);
    printf("Enter the second number:\n");
    scanf("%f", &b);

    x = a + b;

    /* Printing decimal number */
    printf("Result: %.1f + %.1f = %.1f", a, b, x);
    return 0;
}
Upvotes: 5
Views: 272
Reputation: 51825
If you want to be 'smart' (and I use that term loosely rather than advisedly, and I am not trying to detract from the great answer posted by @Sebi) and format the output according to the maximum precision of the inputs, then the following will work for fractional parts that aren't zero:
#include <stdio.h>
#include <math.h> // Needed for fabs()

int main()
{
    float a, b, x, test;
    int aPrec = 0, bPrec = 0, xPrec = 0, iTest;

    printf("Enter the first number:\n");
    scanf("%f", &a);
    test = (float)fabs(a); // Just in case it's -ve!
    iTest = (int)(test + 0.5);
    while (test != (float)(iTest)) {
        ++aPrec;
        test *= 10.0;
        iTest = (int)(test + 0.5);
    }

    printf("Enter the second number:\n");
    scanf("%f", &b);
    test = (float)fabs(b); // Just in case it's -ve!
    iTest = (int)(test + 0.5);
    while (test != (float)(iTest)) {
        ++bPrec;
        test *= 10.0;
        iTest = (int)(test + 0.5);
    }

    x = a + b;
    xPrec = aPrec > bPrec ? aPrec : bPrec; // Or, you could use max()!!

    // The "*" precision specifiers get their values from the arguments
    // immediately preceding the relevant 'float' values!
    printf("Result: %.*f + %.*f = %.*f", aPrec, a, bPrec, b, xPrec, x);
    return 0;
}
What the code does is keep multiplying each input by 10 until this 'test' value equals its (rounded) integral value; the number of multiplications needed is the "precision" of each input, a and b. Now, being smart (again, I use the term loosely), it takes the output precision to be the greater of the two input precisions.
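If you prefer, the repeated loop can be pulled out into a helper function. This is just a sketch of the same idea; the name fractionDigits is purely illustrative and not part of the program above:

#include <math.h>

// Sketch only: counts how many times 'v' must be multiplied by 10
// before it equals its rounded integral value.
static int fractionDigits(float v)
{
    float test = (float)fabs(v); // Just in case it's -ve!
    int prec = 0;
    int iTest = (int)(test + 0.5);
    while (test != (float)(iTest)) {
        ++prec;
        test *= 10.0;
        iTest = (int)(test + 0.5);
    }
    return prec;
}

With that, aPrec = fractionDigits(a); and bPrec = fractionDigits(b); replace the two inline loops.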
If you don't know how the "%.*f" format specifier works, I can add more details, of course.
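In short (a minimal standalone example, separate from the program above): the * in %.*f makes printf read the precision from an int argument placed just before the value being printed.

#include <stdio.h>

int main(void)
{
    double pi = 3.14159265;
    // The int argument before each value supplies the precision.
    printf("%.*f\n", 2, pi); // prints 3.14
    printf("%.*f\n", 4, pi); // prints 3.1416
    return 0;
}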
Upvotes: 1
Reputation: 588
I think what you want is to use the %g format specifier in the printf call. It prints the shortest representation of the value, dropping any trailing zeros. You can read more about format specifiers here: http://www.cplusplus.com/reference/cstdio/printf/
With the code below, entering 5 and 5 prints the result as 10, and entering 5.1 and 5.2 prints it as 10.3 (the inputs themselves are still formatted with %.1f).
#include <stdio.h>

int main()
{
    float a;
    float b;
    float x;

    printf("Enter the first number:\n");
    scanf("%f", &a);
    printf("Enter the second number:\n");
    scanf("%f", &b);

    x = a + b;

    /* Printing the result with %g drops the trailing ".0" */
    printf("Result: %.1f + %.1f = %g\n", a, b, x);
    return 0;
}
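One thing to keep in mind (a small standalone example, not the answer's program): by default %g limits output to six significant digits and switches to exponent notation for very large or very small values.

#include <stdio.h>

int main(void)
{
    // %g drops trailing zeros and the decimal point when it can...
    printf("%g\n", 4.0);       // 4
    printf("%g\n", 10.5);      // 10.5
    // ...but it falls back to exponent form for larger values.
    printf("%g\n", 1234567.0); // 1.23457e+06
    return 0;
}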
Upvotes: 10