Reputation: 353
I am very new to C; I come from a Python background. I would like to know where I went wrong with my code.
I am doing the CS50 greedy problem. What is wrong with my code? It works for some inputs but not others. I am trying to get an input from the user asking how much change is owed, then calculate the minimum number of coins I can give back using only $.25, $.10, $.05, and $.01.
#include <cs50.h>
#include <stdio.h>

int main(void)
{
    float n;
    do
    {
        n = get_float("How much change is owed?\n");
    }
    while (n == EOF);

    int minimumamountofcoins = 0;

    if (n / .25 >= 1)
    {
        do
        {
            n -= .25;
            minimumamountofcoins++;
        }
        while (n / .25 >= 1);
    }
    if (n / .1 >= 1)
    {
        do
        {
            n -= .1;
            minimumamountofcoins++;
        }
        while (n / .1 >= 1);
    }
    if (n / .05 >= 1)
    {
        do
        {
            n -= .05;
            minimumamountofcoins++;
        }
        while (n / .05 >= 1);
    }
    if (n / .01 >= 1)
    {
        do
        {
            n -= .01;
            minimumamountofcoins++;
        }
        while (n / .01 >= 1);
    }
    printf("The minimum amount of coins is %d\n", minimumamountofcoins);
}
New code (works perfectly except when entering 4.2):
#include <cs50.h>
#include <stdio.h>

int main(void)
{
    float n;
    do
    {
        n = get_float("How much change is owed?\n");
    }
    while (n == EOF);

    int cents = (int)(n * 100);
    int minimumamountofcoins = 0;

    if (cents / 25 >= 1)
    {
        while (cents / 25 >= 1)
        {
            cents -= 25;
            minimumamountofcoins++;
        }
    }
    if (cents / 10 >= 1)
    {
        while (cents / 10 >= 1)
        {
            cents -= 10;
            minimumamountofcoins++;
        }
    }
    if (cents / 5 >= 1)
    {
        while (cents / 5 >= 1)
        {
            cents -= 5;
            minimumamountofcoins++;
        }
    }
    if (cents / 1 >= 1)
    {
        while (cents / 1 >= 1)
        {
            cents -= 1;
            minimumamountofcoins++;
        }
    }
    printf("The minimum amount of coins is %d\n", minimumamountofcoins);
}
Upvotes: 2
Views: 687
Reputation: 863
Since you did not include test cases, I wrote my own. Here are some of the cases for which your algorithm does not return the correct answer:
.04, .11, .17, .19, .21, .26, .32, etc.
These cases all fail when counting the pennies (in the final do-while loop), and they all return one less coin than they should. This is because of floating-point rounding error. With print statements, I discovered that when the division for the final penny was being calculated, the same thing occurred each time:
n/.01 = 0.99999999
This is obviously not intended: the quotient should equal 1, so the final penny should be added to the total. Thus your code, which works in theory, is broken by floating-point arithmetic.
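For instance, a minimal standalone check (separate from the code above, with the failing input hard-coded) shows what the float actually stores under typical IEEE-754 arithmetic:

#include <stdio.h>

int main(void)
{
    float n = 0.04f;
    // 0.04 has no exact binary representation, so the float
    // actually holds roughly 0.0399999991.
    printf("n       = %.10f\n", n);       // prints 0.0399999991
    printf("n / .01 = %.10f\n", n / .01); // just under 4
}

Because the quotient starts just short of 4, after three subtractions it sits just short of 1, and the >= 1 test rejects the fourth penny.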
To avoid this, you could do any number of things: keep track of dollars and cents separately as integers, change the condition to n/.01 >= .9999 instead of n/.01 >= 1, or treat the amount of money you are doing the calculations on as an integer number of cents. The tolerant-condition option is sketched just below.
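As a rough illustration of that second option (a minimal sketch, not a full solution; the input is hard-coded rather than read from the user), the penny loop would become:

#include <stdio.h>

int main(void)
{
    float n = 0.04f; // one of the failing cases above
    int coins = 0;
    // Accept anything within a small tolerance of a whole
    // penny instead of demanding exactly >= 1.
    while (n / .01 >= .9999)
    {
        n -= .01f;
        coins++;
    }
    printf("coins: %d\n", coins); // prints 4
}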
Personally, I prefer the last option: treating the amount of money as an integer number of cents. This is easy, since converting from dollars to cents is just a multiplication by 100, and it lets you reuse the same algorithm with integers.
Thus, the code would look something like this:
#include <stdio.h>

int main(void)
{
    float n;
    // code that parses in the amount of money; n now stores that amount

    // Convert dollars to cents so all later arithmetic is exact.
    int cents = (int)(n * 100);
    int minimumamountofcoins = 0;

    if (cents / 25 >= 1) // quarters
    {
        while (cents / 25 >= 1)
        {
            cents -= 25;
            minimumamountofcoins++;
        }
    }
    if (cents / 10 >= 1) // dimes
    {
        while (cents / 10 >= 1)
        {
            cents -= 10;
            minimumamountofcoins++;
        }
    }
    if (cents / 5 >= 1) // nickels
    {
        while (cents / 5 >= 1)
        {
            cents -= 5;
            minimumamountofcoins++;
        }
    }
    if (cents / 1 >= 1) // pennies
    {
        while (cents / 1 >= 1)
        {
            cents -= 1;
            minimumamountofcoins++;
        }
    }
    printf("The minimum amount of coins is %d\n", minimumamountofcoins);
}
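One caveat, which also explains why the updated code in the question fails on 4.2: the cast (int)(n * 100) truncates. Since 4.2 is stored as roughly 4.1999998, multiplying by 100 gives about 419.99997, which truncates to 419 cents. Rounding before the cast avoids this; a small standalone demonstration (separate from the code above) using roundf from math.h:

#include <math.h>
#include <stdio.h>

int main(void)
{
    float n = 4.2f;                     // stored as roughly 4.1999998
    int truncated = (int)(n * 100);     // 419.99997 truncates to 419
    int rounded = (int)roundf(n * 100); // roundf gives 420.0, cast to 420
    printf("truncated: %d, rounded: %d\n", truncated, rounded);
}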
Upvotes: 2