user22701105

Reputation:

atof() returns inaccurate value

I am trying to convert a string with a maximum of 10 characters into a float value using the atof function. Unfortunately, it is not working as expected: the function returns an inaccurate value.

Here is my code:


#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_LENGTH 10
#define BUFFER (MAX_LENGTH + 1)

int validateInput(char input[]);

int main(void)
{
        int return_value;
        char tocheck1[] = "1234567.89";
        return_value = validateInput(tocheck1);
        if (return_value == 0) {
                printf("\nInvalid.");
        } else {
                printf("Valid.");
        }
        return 0;
}


int validateInput(char input[])
{
        float floatvalue;
        char validationstring[BUFFER];
        int result;
        if (strlen(input)>MAX_LENGTH) {
                return 0;
        }
        floatvalue = atof(input);
        printf("\n%f", floatvalue);
        sprintf(validationstring, "%.2f", floatvalue);
        printf("\n%s", validationstring);
        result = strcmp(input, validationstring);
        if (result == 0) {
                return 1;
        } else {
                return 0;
        }
}

For testing purposes I am printing the float value and the validationstring, and I can see that the converted string differs from the input string.

I also tried using double instead of float, but the result is still inaccurate.

The terminal-output is as follows:

1234567.875000
1234567.88
Invalid.

Why are the digits of the input string converted incorrectly? I understand why sprintf produces the ".88" decimals, and thus I understand the "Invalid" output. But I don't understand why the conversion in the first step is not exact.

It is a university task where I have to do it as described: take an input (max. 10 digits), convert it with atof to a float value, convert it back with sprintf, and finally compare it with the initial string (input).

Happy to hear from you, guys. If you need further info, please let me know.

Upvotes: 0

Views: 196

Answers (1)

Eric Postpischil

Reputation: 222437

In the format commonly used for float, the closest representable value to 1,234,567.89 is 1,234,567.875.

The IEEE-754 binary32 format, also called “single precision,” is commonly used for the float type. In this format, numbers are represented as ±f•2ᵉ, where f is an integer, 0 ≤ f < 2²⁴, and e is an integer, −149 ≤ e ≤ 104. (This is expressible using different scalings for f, so that f is not an integer and the range for e is adjusted accordingly; the forms are mathematically equivalent.)
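If you want to see f and e for a concrete value, here is a minimal sketch (assuming your platform's float is IEEE-754 binary32, which is typical but not required by the C standard) that recovers them with frexpf and ldexpf:

#include <math.h>
#include <stdio.h>

int main(void)
{
        float x = 1234567.89f;          /* rounds to the nearest binary32 value */
        int e;
        float m = frexpf(x, &e);        /* x == m * 2^e, with 0.5 <= m < 1 */
        long f = (long) ldexpf(m, 24);  /* scale the significand up to a 24-bit integer */
        printf("%ld * 2^%d = %.3f\n", f, e - 24, ldexpf((float) f, e - 24));
        return 0;
}

On a binary32 implementation this prints 9876543 * 2^-3 = 1234567.875, matching the decomposition given below.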

Numbers in this form are encoded using 32 bits:

  • The + or − is encoded in one bit.
  • Of the 24 binary digits of f, 23 are encoded explicitly in 23 bits.
  • The exponent range, −149 ≤ e ≤ 104, spans 254 numbers, so this is encoded in 8 bits. 8 bits can encode 256 numbers, so the other two values are used to represent infinities, NaNs (“Not a Number”), and cases where the leading bit of f is zero.
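To make the field layout concrete, here is a small sketch (again an illustration only, assuming binary32 and that the bytes of a float and of a uint32_t are laid out the same way on your platform, which is the case on common hardware) that copies the bits into an integer and splits them into the three fields:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        float x = 1234567.89f;
        uint32_t bits;
        memcpy(&bits, &x, sizeof bits);              /* reinterpret the 32 bits of the float */
        uint32_t sign     = bits >> 31;              /* 1 sign bit */
        uint32_t exponent = (bits >> 23) & 0xFF;     /* 8 exponent bits (biased) */
        uint32_t fraction = bits & 0x7FFFFF;         /* 23 explicitly stored bits of f */
        printf("sign=%u exponent=%u fraction=0x%06X\n",
               (unsigned) sign, (unsigned) exponent, (unsigned) fraction);
        return 0;
}

For a normal number, the stored exponent E and the e above are related by e = E − 150, and the full 24-bit f is the 23 stored fraction bits with an implicit leading 1 bit prepended.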

In this format, the closest representable number to 1,234,567.89 is +9,876,543•2⁻³ = 1,234,567.875. Since f must be an integer, the next greater representable number is +9,876,544•2⁻³ = 1,234,568. (We cannot make the spacing any finer by using 2⁻⁴ instead of 2⁻³, because then f would have to be roughly twice 9,876,543, and that would make it larger than 2²⁴.)
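You can observe both neighboring representable values directly with nextafterf (a sketch under the same binary32 assumption):

#include <math.h>
#include <stdio.h>

int main(void)
{
        float x = 1234567.89f;  /* rounds to the nearest representable float */
        printf("below: %.3f\n", nextafterf(x, 0.0f));      /* next float toward zero */
        printf("value: %.3f\n", x);
        printf("above: %.3f\n", nextafterf(x, INFINITY));  /* next float toward infinity */
        return 0;
}

This prints 1234567.750, 1234567.875, and 1234568.000: the representable values in this range are 0.125 = 2⁻³ apart, and 1,234,567.89 falls between the last two, closer to 1,234,567.875.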

So, when the value from atof("1234567.89") is stored in your float and comes out as 1,234,567.875, you are getting the best possible result.

Upvotes: 0
