stanely

Reputation: 352

gmpy2 mpfr values are limited to 301,033 digits, of which 301,032 are correct

I am writing a Python 3 (64-bit) program to calculate pi to at least a million digits, using the gmpy2 module on 64-bit Windows 10. I'm using the Chudnovsky algorithm. The series arithmetic is genius: easy to follow and straightforward to implement. The problem I'm having is representing very high precision real numbers with gmpy2.
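For reference, here is a minimal, unoptimized sketch of the series (a direct translation of the formula, not my actual program):

    import gmpy2

    def chudnovsky_pi(terms):
        # Assumes gmpy2.get_context().precision has already been set high
        # enough; each term of the series adds roughly 14 correct decimal digits.
        total = gmpy2.mpfr(0)
        for k in range(terms):
            num = gmpy2.fac(6 * k) * (13591409 + 545140134 * k)
            den = gmpy2.fac(3 * k) * gmpy2.fac(k)**3 * gmpy2.mpz(640320)**(3 * k)
            term = gmpy2.mpfr(num) / gmpy2.mpfr(den)
            total += -term if k % 2 else term
        # 1/pi = 12 * total / 640320^(3/2), so:
        return gmpy2.sqrt(gmpy2.mpfr(640320)**3) / (12 * total)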

I boiled this problem down to a very simple test.

  1. Check the gmpy2 system limits. They look great.
  2. Set precision context as needed for a million digit pi.
  3. Assign a string variable with a million digit pi.
  4. Assign an mpfr variable from the string using mpfr('million digit pi').
  5. Print the string variable; it has a million digits.
  6. Print the mpfr; it has a little over 300,000 digits.

When I convert the mpfr to a string and print it, I get similar results.

What's going on here, and how do I squeeze a million digits of precision out of gmpy2?

    import gmpy2                # Get the gmpy2 library

    print('max_precision =', gmpy2.get_max_precision())
    print('emin_min =', gmpy2.get_emin_min())
    print('emax_max =', gmpy2.get_emax_max())

    gmpy2.get_context().precision = 1000000 + 3   # Set the precision context.
    print(gmpy2.get_context())

    million_string = "million digits of pi. I didn't paste it here, but you can get it from Eve Andersson's website, http://www.eveandersson.com/pi/digits/1000000; you'll need to strip out the newlines"

    print('string =', million_string)

    million_pi = gmpy2.mpfr(million_string)

    print('\n\nHere comes the mpfr...\n\n')
    print(million_pi)

In my output I get the printed max and min values, the context, the million-digit pi string, and about the first 300,000 (give or take) digits of pi. I'd love to show you the exact output, but the system flags my nicely formatted output as code and won't let me post it, and I can't figure out a way around it.

Upvotes: 3

Views: 583

Answers (1)

casevh

Reputation: 11424

The precision is specified in bits, not decimal digits. Each decimal digit requires log(10)/log(2) ≈ 3.32 bits, so a precision of 1,000,003 bits yields 1000003*log(2)/log(10), or roughly 301,030.9, accurate decimal digits. To get at least a million decimal digits, you need about 1000000*log(10)/log(2) ≈ 3,321,929 bits.
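A minimal sketch of setting the context accordingly (the handful of guard bits is a judgment call, not a gmpy2 requirement):

    import math
    import gmpy2

    digits = 1000000
    # Each decimal digit needs log2(10) ~ 3.32 bits; round up and add guard bits.
    bits = math.ceil(digits * math.log2(10)) + 4

    gmpy2.get_context().precision = bits
    print('precision =', gmpy2.get_context().precision, 'bits')

With the context set this way, mpfr(million_string) should retain all one million digits.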

Upvotes: 4
