snoopysalive

Reputation: 68

C (C++, Objective-C) typecast from float to int results in decremented int

I've written an iPhone app and encountered a problem with typecasting between float and int. I've rewritten the code in plain C, and the results were always the same, no matter whether it was compiled under OS X (console and Xcode), Linux (console), or Windows (Visual Studio):

// Calculation of a page index depending on the amount of pages, its x-offset on the view
// and the total width of the view
#include <stdio.h>

int main()
{
    int   result   = 0;
    int   pagesCnt = 23;
    float offsetX  = 2142.0f;
    float width    = 7038.0f;

    offsetX = offsetX / width;
    offsetX = (float)pagesCnt * offsetX;
    result  = (int)offsetX;

    printf("%f (%d)\n", offsetX, result);
    // The console should show "7.000000 (7)" now
    // But actually I read "7.000000 (6)"

    return 0;
}

Of course, we have a loss of precision here. When doing the math with the calculator, the result is 7.00000000000008 and not just 7.000000.

Nevertheless, as far as I understand C, this shouldn't be a problem, since C is meant to truncate the fractional part of a floating-point number when converting it to an integer. In this case that is actually what I want.

When using double instead of float, the result would be as expected, but I don't need double precision here and it shouldn't make any difference.
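For reference, this is a minimal sketch of the double variant I'm referring to (same constants as above, just wider types):

#include <stdio.h>

int main()
{
    int    result   = 0;
    int    pagesCnt = 23;
    double offsetX  = 2142.0;
    double width    = 7038.0;

    offsetX = offsetX / width;
    offsetX = (double)pagesCnt * offsetX;
    result  = (int)offsetX;

    // On the systems I tried this prints "7.000000 (7)",
    // which is what I mean by "the result would be as expected".
    printf("%f (%d)\n", offsetX, result);

    return 0;
}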

Am I getting something wrong here? Are there situations when I should avoid conversion to int by using (int)? Why is C decrementing result anyway? Am I supposed to always use double?

Thanks a lot!

Edit: I've phrased something badly: of course, it makes a difference whether I use double or float. What I meant is that it shouldn't make a difference whether (int) is applied to 7.000000f or to 7.00000000000008.

Edit 2: Thanks to you I understand my mistake now. Using printf() without specifying the floating-point precision was what misled me. Casting 6.99... to int and getting 6 is correct, of course. So I'll use double in the future when encountering problems like this.

Upvotes: 3

Views: 2723

Answers (3)

Brett Hale

Reputation: 22318

The compiler is not free to change the order of floating-point operations in many cases, since finite-precision arithmetic is not, in general, associative. For example,

(23 * 2142) / 7038 = 7.00000000000000000000

(2142 / 7038) * 23 = 6.99999999999999999979

These aren't single-precision results, but they show that double precision will not address your problem either.
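A quick sketch with the single-precision constants from the question shows the same order dependence:

#include <stdio.h>

int main()
{
    // Same constants as in the question, evaluated in two different orders.
    float a = (23.0f * 2142.0f) / 7038.0f;  // multiply first
    float b = (2142.0f / 7038.0f) * 23.0f;  // divide first

    printf("%.8f\n", a);  // 7.00000000 - 23 * 2142 = 49266 and 49266 / 7038 = 7 are exact
    printf("%.8f\n", b);  // 6.99999952 - 7/23 has no exact float representation

    return 0;
}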

Upvotes: 1

David Alber

Reputation: 18111

If you change printf("%f (%d)\n", offsetX, result); to printf("%.8f (%d)\n", offsetX, result); you will see the problem. After making that change, the output is:

6.99999952 (6)

You can correct this by rounding instead of casting to int. That is, change result = (int)offsetX; to result = roundf(offsetX);. Don't forget to #include <math.h>. Now, the output is:

6.99999952 (7)
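For completeness, a minimal version of the corrected program might look like this (same constants as in the question):

#include <math.h>
#include <stdio.h>

int main()
{
    int   pagesCnt = 23;
    float offsetX  = 2142.0f;
    float width    = 7038.0f;

    offsetX = offsetX / width;
    offsetX = (float)pagesCnt * offsetX;

    // Round to the nearest integer instead of truncating toward zero.
    int result = (int)roundf(offsetX);

    printf("%.8f (%d)\n", offsetX, result);  // 6.99999952 (7)

    return 0;
}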

Upvotes: 1

Jens Gustedt

Reputation: 78903

When I increase the precision of the output, the result that I get from your program is 6.99999952316284179688 (6), so 6 seems to be correct.
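Concretely, the only change relative to the program in the question is the format string, for example:

// Print 20 digits after the decimal point to expose the value actually stored.
printf("%.20f (%d)\n", offsetX, result);  // 6.99999952316284179688 (6)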

Upvotes: 4
