Sounak

Reputation: 4968

Why does typecasting a double to int seem to round it after the 16th digit?

//g++  5.4.0

#include <iostream>

int main()
{
    std::cout << "Hello, world!\n";
    std::cout << (int)0.9999999999999999 << std::endl; // 16 digits after decimal
    std::cout << (int)0.99999999999999999 << std::endl; // 17 digits after decimal
}

Output:

Hello, world!
0
1

Why does this happen?

Upvotes: 1

Views: 143

Answers (2)

user10543939

The most accurate representation of 0.99999999999999999 is 1.0.¹
The most accurate representation of 0.9999999999999999 is 0.999999999999999888977697537484.

¹ In 64-bit double-precision IEEE 754 floating-point representation.

Since conversion to an integer type truncates rather than rounds, the first gives 1 and the second gives 0.
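
A quick way to see this (a minimal sketch, assuming IEEE 754 doubles and the standard iostream/iomanip headers) is to print both literals with more precision than the default six significant digits:

#include <iostream>
#include <iomanip>

int main()
{
    std::cout << std::setprecision(30);
    std::cout << 0.9999999999999999 << std::endl;  // prints 0.999999999999999888977697537484
    std::cout << 0.99999999999999999 << std::endl; // prints 1
}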

Upvotes: 3

user8470759

The most accurate floating-point representation of the value 0.99999999999999999 (17 digits after the decimal) is exactly 1.0.

The most accurate floating-point representation of the value 0.9999999999999999 (16 digits after the decimal) is less than 1.0.

The conversion to int truncates the first to 1 and the second to 0.
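
If rounding to the nearest integer is what you actually want, a small sketch (assuming the standard cmath header) contrasts the truncating cast with std::lround:

#include <cmath>
#include <iostream>

int main()
{
    double d = 0.9999999999999999;                 // stored as slightly less than 1.0
    std::cout << static_cast<int>(d) << std::endl; // 0: the cast truncates toward zero
    std::cout << std::lround(d) << std::endl;      // 1: std::lround rounds to nearest
}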

Upvotes: 1
