Lod

Reputation: 90

Where is the error in Chudnovsky algorithm (python)?

I'm new to Python (and coding in general), so I'm following a tutorial book. I'm trying to calculate pi to a set number of decimal places using the Chudnovsky algorithm, with code outlined in the book; however, when I execute the code I get an error saying:

> File "C:/Users/user/Documents/Python/Scripts/Tutorials/Calculating pi.py", line 15, in calc
>     t = (Decimal(-1)**k)*(math.factorial(Decimal(6)*k))*(13591409 + 545140134*k)
> TypeError: 'decimal.Decimal' object cannot be interpreted as an integer

Here is the original code:

    from decimal import Decimal, getcontext
    import math
    
    numberofdigits = int(input("please enter the number of decimal places to calculate Pi to: "))
    getcontext().prec = numberofdigits
    
    def calc(n):
        t = Decimal(0)
        pi = Decimal(0)
        deno = Decimal(0)
        k = 0
        for k in range(n):
            t = (Decimal(-1)**k)*(math.factorial(Decimal(6)*k))*(13591409+545140134*k)
            deno = math.factorial(3*k)*(math.factorial(k)**Decimal(3))*(640320**(3*k))
            pi += Decimal(t)/Decimal(deno)
        pi = pi * Decimal(12)/Decimal(640320**Decimal(1.5))
        pi = 1/pi
        return str(pi)
    
    
    print (calc(1))

Where am I going wrong here? I have triple-checked for spelling errors etc. but have not found anything, and I don't really understand what this TypeError about 'decimal.Decimal' means.

EDIT: I've been playing around with it and found that if I separate the terms of the numerator I get:

    def calc(n):
        t = Decimal(0)
        pi = Decimal(0)
        deno = Decimal(0)
        k = 0
        for k in range(n):
            u = (Decimal(-1)**k)
            x = (Decimal(6)*k)
            v = math.factorial(x)
            w = (13591409+545140134*k)
            t = u*v*w
            deno = math.factorial(3*k)*(math.factorial(k)**Decimal(3))*(640320**(3*k))

This gives me the following error:

> line 17, in calc
>     v=math.factorial(x)
> TypeError: 'decimal.Decimal' object cannot be interpreted as an integer

Cheers

Upvotes: 3

Views: 244

Answers (1)

adamgy

Reputation: 5603

The problem is that the math.factorial() function accepts only integers (or floats with integral values), but does not support Decimal objects:

print(math.factorial(6))
# 720
print(math.factorial(6.0))
# 720
print(math.factorial(Decimal(6)))
# TypeError: 'decimal.Decimal' object cannot be interpreted as an integer
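If you do need to keep an intermediate value as a Decimal, one workaround (my suggestion, not part of the original answer) is to convert it to a plain int just before the factorial call:

```python
from decimal import Decimal
import math

x = Decimal(6) * 2            # Decimal('12'), an integral Decimal
v = math.factorial(int(x))    # convert to a plain int first
print(v)                      # 479001600
```

This is safe here because 6*k is always an exact integer, so the int() conversion loses nothing.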

Changing the value passed to math.factorial() on line 15 should fix the error:

t = (Decimal(-1)**k) * (math.factorial(6 * k)) * (13591409+545140134 * k)

Interestingly, your original code works fine on Python 3.6.9 but fails on Python 3.8.2: math.factorial() only started rejecting Decimal arguments in Python 3.8 (and yes, this is the intended behavior).

The logic behind this behavior can be fully understood by reading this discussion about dropping support for Decimal objects in math.factorial():

Issue 33083: math.factorial accepts non-integral Decimal instances
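For completeness, here is a minimal corrected sketch of the questioner's function with that one change applied (I've hard-coded a precision of 30 in place of the interactive prompt, purely for illustration):

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 30  # fixed precision for this example

def calc(n):
    pi = Decimal(0)
    for k in range(n):
        # pass plain ints to math.factorial(), not Decimals
        t = (Decimal(-1)**k) * math.factorial(6*k) * (13591409 + 545140134*k)
        deno = math.factorial(3*k) * (math.factorial(k)**Decimal(3)) * (640320**(3*k))
        pi += Decimal(t) / Decimal(deno)
    pi = pi * Decimal(12) / Decimal(640320**Decimal(1.5))
    pi = 1 / pi
    return str(pi)

print(calc(3))
```

Each term of the Chudnovsky series adds roughly 14 correct digits, so even calc(3) is accurate to the full 30-digit working precision.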

Upvotes: 2
