I had the following function written in Python as part of a larger simulation:
#!/usr/bin/python
counter = 1
while (counter < 10000):
    oldpa = .5
    t = 1
    while (t < counter):
        newpa = ((oldpa * t) + 1) / (t + 1)
        t = t + 1
        oldpa = newpa
    counter = counter + 1
    print str(counter) + "\t" + str(oldpa)
Then, I started rewriting the simulation in C so that it would run faster (and also to give myself an excuse to spend time learning C). Here's my C version of the above function.
#include <stdio.h>

int main(void)
{
    int counter, t;
    float oldpa, newpa;

    counter = 1;
    while ( counter < 10000 )
    {
        oldpa = .5;
        t = 1;
        while ( t < counter )
        {
            newpa = ((oldpa * t) + 1) / (t + 1);
            t = t + 1;
            oldpa = newpa;
        }
        counter = counter + 1;
        printf("%d\t%f\n", counter, oldpa);
    }
    return 0;
}
Now, here is the funny thing. When I run the Python version, the result converges to 0.999950, but when I run the C version, it converges to 0.999883. This difference is actually negligible for the purposes of my simulation, but I still want to know why the two versions give different results.
Floating-point values in Python are almost always IEEE-754 double precision, corresponding to a C or C++ double. Your C program declares oldpa and newpa as float, i.e. single precision, so each pass through the inner loop rounds more coarsely, and the accumulated rounding error shows up in the converged value; declare them double and the two programs should agree. If you want a lot more precision, check out the decimal module.
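As a quick sanity check, here is a small sketch (mine, not from the original post; the names f and d and the printed values are illustrative) that runs the last outer pass of the loops above (counter = 9999) in both precisions side by side. With double the recurrence lands near the 0.999950 that Python prints, while with float it drifts low, much as the question reports:

#include <stdio.h>

int main(void)
{
    float  f = 0.5f;  /* same recurrence in single precision, as in the question's C code */
    double d = 0.5;   /* and in double precision, matching Python's float */
    int t;

    /* one outer pass of the original loops, with counter = 9999 */
    for (t = 1; t < 9999; t++)
    {
        f = ((f * t) + 1) / (t + 1);
        d = ((d * t) + 1) / (t + 1);
    }
    printf("float:  %f\n", f);  /* drifts low, in the neighborhood of 0.9998xx */
    printf("double: %f\n", d);  /* about 0.999950, matching the Python run */
    return 0;
}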