nonopolarity

Reputation: 150996

In C and Objective-C, should we almost never cast a float or double value to an integer just by using (int)?

I just ran into a situation in Objective-C where:

NSLog(@"%i", (int) (0.2 * 10));         // prints 2
NSLog(@"%i", (int) ((1.2 - 1) * 10));   // prints 1

so I wonder: if the value is a float or double and we want an integer, should we never just use (int) to do the casting, but use (int) round(someValue) instead? Or, to flip the question around: when should we just use (int)? But in those situations, couldn't (int) round(someValue) also do the job, so shouldn't we almost always use (int) round(someValue)?

Upvotes: 5

Views: 656

Answers (3)

gnasher729

Reputation: 52538

Conversion to int rounds a float or double value towards zero.

1.1 -> 1, 1.99999999999 -> 1, 2.0 -> 2

-1.1 -> -1, -1.999999999 -> -1, -2 -> -2

Is that what you want? If yes, that's what you should do. If not, what do you want?

Floating-point arithmetic always gives rounding errors. So for example 0.2 * 10 will give a number that is close to 2. Might be a little bit less, or a little bit more, or by pure chance it might be exactly 2. Therefore (int) (0.2 * 10) might be 1 or 2, because "a little bit less than 2" will be converted to 1.

round(x) will round to the nearest integer. Again, if you calculate round(1.4 + 0.1), the sum of 1.4 and 0.1 is some number very close to 1.5, maybe a bit less, maybe a bit more, so you don't know whether it gets rounded to 1.0 or 2.0.

Would you want all numbers from 1.5 to 2.5 to be rounded to 2? Use (int) round(x). You might get a slightly different result if x is exactly on a boundary like 1.5 or 2.5. Maybe you want numbers up to 1.9999 to be rounded down, but 1.99999999 rounded to 2. Use double, not float, and calculate (int) (x + 0.000001).

Upvotes: 0

Richard J. Ross III

Reputation: 55553

It depends on what you want. Obviously, a straight cast to int will be faster than a call to round, whereas round will give more accurate results.

Unless you are writing code that relies on speed to be effective (in which case floating-point values might not be the best choice either), I would say it's worth calling round. Even if it only changes something you display on-screen by one pixel, when dealing with certain things (angle measures, colors, etc.), the more accuracy you can have, the better.

EDIT: Simple test to back up my claim of casting being faster than rounding:

Tested on Macbook Pro:

  • 2.8 GHz Intel Core 2 Duo
  • Mac OS 10.7.4
  • Apple LLVM 3.1 Compiler
  • -O0 (no optimization)

Code:

#include <stdio.h>
#include <time.h>
#include <math.h>

int value;
void test_cast()
{
    clock_t start = clock();
    value = 0;
    for (int i = 0; i < 1000 * 1000; i++)
    {
        value += (int) (((i / 1000.0) - 1.0) * 10.0);
    }

    printf("test_cast: %lu\n", (unsigned long) (clock() - start));
}

void test_round()
{
    clock_t start = clock();
    value = 0;
    for (int i = 0; i < 1000 * 1000; i++)
    {
        value += round(((i / 1000.0) - 1.0) * 10.0);
    }

    printf("test_round: %lu\n", (unsigned long) (clock() - start));
}

int main()
{
    test_cast();
    test_round();
}

Results:

test_cast: 11895
test_round: 14353

Note: I know that clock() isn't the best profiling function, but it does show that round() at least uses more CPU cycles.

Upvotes: 1

Eric Postpischil

Reputation: 222640

The issue here is not converting floating-point values to integer but the rounding errors that occur with floating-point numbers. When you use floating-point, you should understand it well.

The common implementation of float and double use IEEE 754 binary floating-point values. These values are represented as a significand multiplied by a power of two multiplied by a sign (+1 or -1). For floats, the significand is a 24-bit binary numeral, with one bit before the “decimal point” (or “binary point” if you prefer). E.g., the number 1.25 has a significand of 1.01000000000000000000000. For doubles, the significand is a 53-bit binary numeral, with one bit before the point.

Because of this, the values of the decimal numerals .1 and 1.1 cannot be exactly represented. They must be approximated. When you write “.1” or “1.1” in source code, the compiler will convert it to a double that is very near the actual value. Sometimes that result will be slightly greater than the actual value. Sometimes it will be slightly lower than the actual value.

When you convert a floating-point value to int, the result is (by definition) only the integer portion of the value (the value is truncated toward zero). So, if your value is slightly greater than a positive integer, you get that integer. If your value is slightly less than a positive integer, you get the next lower integer.

If you expect that exact mathematics would give you an integer result, and the floating-point operations you are performing are so few and so simple that the errors have not accumulated too much, then you can round the floating-point value to an integer using the round function. In simple situations, you can also round by adding .5 before truncating, as in “(int) (f + .5)”.

Upvotes: 3

Related Questions