Reputation: 352
I am writing a Python 3 (64-bit) program to calculate pi to at least a million digits using the gmpy2 module on 64-bit Windows 10. I'm using the Chudnovsky algorithm. The series arithmetic is genius: easy to follow and straightforward to implement. The problem I'm having is representing very high-precision real numbers with gmpy2.
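For reference, here is a minimal, naive sketch of the summation I'm implementing, term by term (a real million-digit run would use binary splitting instead); the constants 13591409, 545140134, 426880, 10005, and (-640320)^3 = -262537412640768000 come from the published series:

import gmpy2
from gmpy2 import mpfr, mpz

def chudnovsky_pi(digits):
    # ~3.32 bits per decimal digit, plus guard bits.
    gmpy2.get_context().precision = int(digits * 3.3219280948873626) + 64
    total = mpfr(0)
    # Each term of the series contributes ~14.18 decimal digits.
    for k in range(digits // 14 + 2):
        num = gmpy2.fac(6 * k) * (13591409 + 545140134 * k)
        den = gmpy2.fac(3 * k) * gmpy2.fac(k) ** 3 * mpz(-262537412640768000) ** k
        total += mpfr(num) / mpfr(den)
    return 426880 * gmpy2.sqrt(mpfr(10005)) / total

print(chudnovsky_pi(50))  # sanity check at 50 digits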
I boiled this problem down to a very simple test.
What's going on here, and how do I squeeze a million digits precision out of gmpy2?
import gmpy2 # Get the gmpy2 library
print('max_precision =', gmpy2.get_max_precision())
print('emin_min =', gmpy2.get_emin_min())
print('emax_max =', gmpy2.get_emax_max())
gmpy2.get_context().precision = 1000000 + 3 # Set the context precision.
print(gmpy2.get_context())
million_string = "million digits of pi. I didn't paste it here, but you can get it from Eve Andersson's website at http://www.eveandersson.com/pi/digits/1000000; you'll need to strip out the newlines"
print('string =', million_string)
million_pi = gmpy2.mpfr(million_string)
print('\n\nHere comes the mpfr...\n\n')
print(million_pi)
In my output I get the printed max, mins, context, the million-digit pi string, and then only about the first 300,000 (give or take) digits of pi. When I convert the mpfr to a string and print that instead, I get similar results. I'd love to show you the exact output, but the system flags my nicely formatted output as code and won't let me post it; I can't figure out a way around that.
Upvotes: 3
Views: 583
Reputation: 11424
The precision is specified in bits, not in decimal digits. log(10)/log(2) ≈ 3.32 bits are required per decimal digit, so 1,000,003 bits of precision produce 1000003 * log(2)/log(10), or roughly 301,030.9, accurate decimal digits. Going the other way, to hold at least a million decimal digits you need about 1,000,000 * log(10)/log(2) ≈ 3,321,929 bits.
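A minimal sketch of that setup (reusing million_string from the question; math is only used to compute the bit count):

import math
import gmpy2

digits = 1_000_000
bits = math.ceil(digits * math.log2(10)) + 16  # log2(10) ~= 3.3219 bits per digit, plus guard bits
gmpy2.get_context().precision = bits           # 3,321,945 bits in total
million_pi = gmpy2.mpfr(million_string)        # now round-trips a bit over a million digits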
Upvotes: 4