Guillaume Paris

Reputation: 10539

double and stringstream formatting

double val = 0.1;
std::stringstream ss;            // requires <sstream>
ss << val;                       // formatted with the stream's default precision
std::string strVal = ss.str();

In the Visual Studio debugger, val has the value 0.10000000000000001 (because 0.1 cannot be represented exactly in binary floating point). When val is converted with the stringstream, strVal equals "0.1". However, when using boost::lexical_cast, the resulting strVal is "0.10000000000000001".
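For comparison, here is a minimal sketch of the two conversions side by side (assuming Boost is available; the exact digit strings depend on the platform's double representation):

#include <sstream>
#include <string>
#include <boost/lexical_cast.hpp>

double val = 0.1;

std::stringstream ss;
ss << val;                         // default precision: 6 significant digits
std::string viaStream = ss.str();  // "0.1"

// lexical_cast formats with enough digits to round-trip the double,
// which is why the representation error becomes visible
std::string viaCast = boost::lexical_cast<std::string>(val);  // "0.10000000000000001"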

Another example is the following:

double val = 12.12305000012;

Under Visual Studio, val appears as 12.123050000119999, and with a stringstream at the default precision (6) it becomes 12.1231. I don't really understand why it is not 12.12305(...).

Is there a default precision, or does stringstream have a particular algorithm to convert a double value which can't be exactly represented?
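For reference, the default can be queried directly from any stream; a minimal check (the value 6 is what the standard mandates):

#include <iostream>
#include <sstream>

int main() {
    std::stringstream ss;
    // Every stream starts with precision 6; in the default format this
    // counts significant digits, which rounds 12.12305000012 to "12.1231".
    std::cout << ss.precision() << '\n';  // prints 6
}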

Thanks.

Upvotes: 19

Views: 54660

Answers (4)

nickolayratchev

Reputation: 1206

You can change the floating-point precision of a stringstream as follows:

double num = 2.25149;
std::stringstream ss;                              // requires <sstream>
ss << std::setprecision(5) << num << std::endl;    // requires <iomanip>
ss << std::setprecision(4) << num << std::endl;
std::cout << ss.str();                             // requires <iostream>

Output:

2.2515
2.251

Note how the numbers are also rounded when appropriate.

Upvotes: 23

YuZ

Reputation: 445

For anyone who gets error: ‘setprecision’ is not a member of ‘std’: you must #include <iomanip>, otherwise std::setprecision(17) will not compile.
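A minimal sketch of the fix:

#include <iomanip>   // declares std::setprecision
#include <sstream>

std::stringstream ss;
ss << std::setprecision(17) << 0.1;  // compiles once <iomanip> is included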

Upvotes: 18

James Kanze

Reputation: 153919

There are two issues you have to consider. The first is the precision parameter, which defaults to 6 (but which you can set to whatever you like). The second is what this parameter means, and that depends on the format option you are using: if you are using fixed or scientific format, it means the number of digits after the decimal point (which in turn has a different effect on what is usually meant by precision in the two formats); if you are using the default format, however (restored with ss.setf( std::ios_base::fmtflags(), std::ios_base::floatfield )), it means the total number of significant digits in the output, regardless of whether the output ends up looking like fixed or scientific notation. This explains why your display is 12.1231, for example: you're using both the default precision and the default formatting.

You might want to try the following with different values (and maybe different precisions):

#include <iostream>

double value = 0.1;   // substitute other values (and precisions) to experiment

std::cout.setf( std::ios_base::fmtflags(), std::ios_base::floatfield );
std::cout << "default:    " << value << std::endl;
std::cout.setf( std::ios_base::fixed, std::ios_base::floatfield );
std::cout << "fixed:      " << value << std::endl;
std::cout.setf( std::ios_base::scientific, std::ios_base::floatfield );
std::cout << "scientific: " << value << std::endl;

Seeing the actual output will probably be clearer than any detailed description:

default:    0.1
fixed:      0.100000
scientific: 1.000000e-01
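Applied to the second value from the question, the same precision of 6 counts significant digits in the default format but digits after the decimal point in fixed format; a minimal sketch (output assumes IEEE-754 doubles):

#include <iostream>
#include <sstream>

int main() {
    double value = 12.12305000012;   // the questioner's second example
    std::stringstream ss;

    ss << value;                     // default format: 6 significant digits
    std::cout << ss.str() << '\n';   // "12.1231"

    ss.str("");
    ss.setf(std::ios_base::fixed, std::ios_base::floatfield);
    ss << value;                     // fixed: 6 digits after the decimal point
    std::cout << ss.str() << '\n';   // "12.123050"
}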

Upvotes: 5

David Hammen

Reputation: 33116

The problem occurs at the stream insertion ss << 0.1; rather than at the conversion to string. If you want non-default precision you need to specify this prior to inserting the double:

ss << std::setprecision(17) << val;

On my computer, if I just use setprecision(16) I still get "0.1" rather than "0.10000000000000001". I need a (slightly bogus) precision of 17 to see that final 1. (17 is std::numeric_limits<double>::max_digits10 for IEEE-754 doubles: the number of significant digits needed to guarantee a round trip through text.)

Addendum
A better demonstration arises with a value of 1.0/3.0. With the default precision you get a string representation of "0.333333". This is not the string equivalent of a double precision 1/3. Using setprecision(16) makes the string "0.3333333333333333"; a precision of 17 yields "0.33333333333333331".
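A self-contained version of that demonstration (the exact digit strings assume IEEE-754 double precision):

#include <iomanip>
#include <iostream>
#include <sstream>

int main() {
    double third = 1.0 / 3.0;
    std::stringstream ss;

    ss << third;                           // default precision (6)
    std::cout << ss.str() << '\n';         // "0.333333"

    ss.str("");
    ss << std::setprecision(16) << third;  // "0.3333333333333333"
    std::cout << ss.str() << '\n';

    ss.str("");
    ss << std::setprecision(17) << third;  // "0.33333333333333331"
    std::cout << ss.str() << '\n';
}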

Upvotes: 4
