NathanBitTheMoon

Reputation: 29

How can I get a more accurate result when dividing numbers in C++

I am trying to estimate pi using C++ as a fun math project. I've run into an issue where I can only get it as precise as 6 decimal places.

I have tried using a float instead of a double but found the same result.

My code works by summing the results of 1/n² for n = 1 up to a defined limit, multiplying the sum by 6, and taking the square root. Written out in mathematical notation:
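
$$\pi \approx \sqrt{6 \sum_{n=1}^{N} \frac{1}{n^2}}$$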

Here is my main function. PREC is the predefined limit. It populates the array with the results of these fractions and sums them. My guess is that the sqrt function is what keeps me from getting more precise than 6 digits.

int main(int argc, char *argv[]) {
    // Allocate PREC + 1 zero-initialized elements so indices 1..PREC are all valid.
    nthsums = new float[PREC + 1]();

    for (int i = 1; i < PREC + 1; i += 1) {
        nthsums[i] = nth_fraction(i);
    }

    float array_sum = sum_array(nthsums);
    array_sum *= 6.0f;

    float result = std::sqrt(array_sum);

    // Note: std::to_string(float) always formats with exactly 6 decimal places.
    std::string resultString = std::to_string(result);

    std::cout << resultString << "\n";

    delete[] nthsums;
}
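
(nth_fraction isn't shown here; a minimal version consistent with the description above might look like the sketch below, though the real one may differ.)

// Hypothetical helper, not the actual code from the project: returns the nth term 1/n^2.
// Converting before the multiply avoids int overflow once n*n exceeds INT_MAX.
float nth_fraction(int n) {
    return 1.0f / ((float)n * (float)n);
}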

Just for the sake of it, I'll also include my sum function as I suspect that there could be something wrong with that, too.

float sum_array(float *array) {
    float returnSum = 0;
    // Walk indices 0..PREC; index 0 is zero-initialized, so it contributes nothing.
    for (int iter = 0; iter < PREC + 1; iter += 1) {
        if (array[iter] >= 0) {
            returnSum += array[iter];
        }
    }

    return returnSum;
}

I would like to get at least 10 digits of precision. Is there any way to do this in C++?

Upvotes: 0

Views: 1019

Answers (1)

Nathan Pierson

Reputation: 5565

So even with long double as the floating-point type used for this, there's some subtlety required, because adding two long doubles of substantially different orders of magnitude can cause precision loss. See here for a discussion in Java, but I believe the behavior is essentially the same in C++.
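
A minimal illustration of that absorption effect, separate from the code below (float is used here just to make the gap dramatic; the same thing happens with long double at larger scales):

#include <iostream>

int main() {
    // At 2^24, consecutive floats are 2 apart, so adding 0.5f
    // cannot change the stored value at all.
    float big = 16777216.0f;   // 2^24
    float small = 0.5f;
    std::cout << (big + small) - big << "\n";   // prints 0, not 0.5
}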

Code I used:

#include <cmath>
#include <iostream>
#include <numbers>

long double pSeriesApprox(unsigned long long t_terms)
{
    long double pi_squared = 0.L;
    // Sum from the smallest term up to the largest, so the tiny terms
    // aren't absorbed by an already-large running total.
    for (unsigned long long i = t_terms; i >= 1; --i)
    {
        pi_squared += 6.L * (1.L / i) * (1.L / i);
    }
    return std::sqrtl(pi_squared);
}

int main(int, char*[]) {
    const long double pi = std::numbers::pi_v<long double>;
    const unsigned long long num_terms = 10'000'000'000;

    std::cout.precision(30);
    std::cout << "Pi == " << pi << "\n\n";
    std::cout << "Pi ~= " << pSeriesApprox(num_terms) << " after " << num_terms << " terms\n";

    return 0;
}

Output:

Pi == 3.14159265358979311599796346854
Pi ~= 3.14159265349430016911469465413 after 10000000000 terms

9 decimal digits of accuracy, which is about what we'd expect from a series converging at this rate.
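
To make "this rate" concrete: the tail of the series past N terms is roughly 1/N, so the expected error in pi is about

$$\Delta\pi \approx \frac{\sum_{n>N} 6/n^2}{2\pi} \approx \frac{6/N}{2\pi} \approx 9.5 \times 10^{-11} \quad \text{for } N = 10^{10},$$

which matches the observed error above almost exactly.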

But if all I do is reverse the order in which the loop in pSeriesApprox runs, adding the exact same terms from largest to smallest instead of smallest to largest:

long double pSeriesApprox(unsigned long long t_terms)
{
    long double pi_squared = 0.L;
    for (unsigned long long i = 1; i <= t_terms; ++i)
    {
        pi_squared += 6.L * (1.L / i) * (1.L / i);
    }
    return std::sqrtl(pi_squared);
}

Output:

Pi == 3.14159265358979311599796346854
Pi ~= 3.14159264365071688729358356795 after 10000000000 terms

Suddenly we're down to 7 digits of accuracy, even though we used 10 billion terms. In fact, after 100 million terms or so, the approximation stops improving and stabilizes at this specific value: by then the running total is so much larger than each new term 6/i² that the additions no longer change it. So while using sufficiently large data types to store these computations is important, some additional care is still needed when performing this kind of sum; see the sketch below.
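
One standard form of that extra care (not what the code above uses, but a common technique) is Kahan compensated summation, which captures the rounding error each addition discards and feeds it back into the next term:

#include <cmath>

// Kahan (compensated) summation applied to the same series.
long double pSeriesApproxKahan(unsigned long long t_terms)
{
    long double sum = 0.L;
    long double c = 0.L;  // running compensation: the error of the last addition
    for (unsigned long long i = 1; i <= t_terms; ++i)
    {
        long double y = 6.L * (1.L / i) * (1.L / i) - c;  // next term, corrected
        long double t = sum + y;   // big + small: low-order digits of y can vanish
        c = (t - sum) - y;         // what was actually added minus what we meant to add
        sum = t;
    }
    return std::sqrtl(sum);
}

With this, even the largest-first loop should hold on to the low-order digits that plain summation loses.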

Upvotes: 2
