ewzuhnaem

Reputation: 33

Why do you need to use both the fixed and showpoint manipulators to show the decimal point and trailing zeros when fixed alone does the job?

I understand what the fixed and showpoint manipulators do on their own, but why would you want to use them both at the same time? This is what DS Malik's C++ Programming textbook says:

Of course, the following statement sets the output of a floating-point number in a fixed decimal format with the decimal point and trailing zeros on the standard output device:
cout << fixed << showpoint;

But doesn't fixed alone set the output to be "fixed decimal format with the decimal point and trailing zeros"?

I tried running my own experiment:

#include <iostream>
#include <iomanip>
using namespace std;
int main() {
    cout << setprecision(3);
    cout << "only fixed" << "\t" << "fixed + showpoint" << endl;
    for (double i = 0.000001; i <= 100000; i *= 10) {
        cout << fixed     << i << "\t\t";
        cout << showpoint << i << endl;
        cout.unsetf(ios::fixed);     
        cout.unsetf(ios::showpoint);
    }
}

But I fail to see the difference between using only fixed and using fixed and showpoint together, in that order:

only fixed      fixed + showpoint
0.000           0.000
0.000           0.000
0.000           0.000
0.001           0.001
0.010           0.010
0.100           0.100
1.000           1.000
10.000          10.000
100.000         100.000
1000.000                1000.000
10000.000               10000.000
100000.000              100000.000

Any insight would be greatly appreciated!

Upvotes: 3

Views: 2367

Answers (2)

ytlu

Reputation: 412

In your case, the showpoint flag has no effect, because every value is printed with 3 digits after the decimal point, so the decimal point is always shown anyway. In some cases, though, the two flags give different results. For example, if you use setprecision(0) in your code, the decimal point is not shown with fixed alone.

    cout << setprecision(0);

prints:

      only fixed   fixed + showpoint
               0                  0.
               0                  0.
               0                  0.
               0                  0.
               0                  0.
               0                  0.
               1                  1.
              10                 10.
             100                100.
            1000               1000.
           10000              10000.
          100000             100000.
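
For completeness, here is a minimal stand-alone sketch of that setprecision(0) case (the value 100.0 is just an arbitrary example, not taken from the question):

#include <iostream>
#include <iomanip>
using namespace std;

int main() {
    double x = 100.0;                 // arbitrary example value

    cout << setprecision(0) << fixed;
    cout << x << endl;                // fixed alone: prints "100" (no decimal point)

    cout << showpoint;
    cout << x << endl;                // fixed + showpoint: prints "100."
}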

Upvotes: 2

Christophe

Reputation: 73465

With fixed you set a number of decimals that you want to see in all cases, so you will always see the decimal point. Combining it with showpoint makes no difference, unless...

Unless... you use setprecision(0), in which case you'll never see any decimals and hence no decimal point. In that case showpoint has the effect of showing a trailing decimal separator with nothing after it:

               0                  0.
               1                  1.
              10                 10.
             100                100.

Now, is this combination useful? I doubt it. So, no, there is no real reason to combine the two.
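
As an aside, a case where showpoint does matter on its own (without fixed) is the default floating-point format, where a whole-number double prints with no decimal point at all. A small sketch, with 3.0 as an arbitrary example value:

#include <iostream>
using namespace std;

int main() {
    double x = 3.0;                    // arbitrary whole-number value

    cout << x << endl;                 // default format: prints "3"
    cout << showpoint << x << endl;    // prints "3.00000" (decimal point and trailing zeros at the default precision of 6)
}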

Upvotes: 4
