Reputation: 422
I am trying to plot a set of extreme floating-point values that require high precision. It seems to me there are precision limits in matplotlib; it cannot resolve differences between values around the scale of 1e28.
This is my code for displaying a graph.
import matplotlib.pyplot as plt
import numpy as np
x = np.array([1737100, 38380894.5188064386003616016502, 378029000.0], dtype=np.longdouble)
y = np.array([-76188946654889063420743355676.5, -76188946654889063419450832178.0, -76188946654889063450098993033.0], dtype=np.longdouble)
plt.scatter(x, y)
#coefficients = np.polyfit(x, y, 2)
#poly = np.poly1d(coefficients)
#new_x = np.linspace(x[0], x[-1])
#new_y = poly(new_x)
#plt.plot(new_x, new_y)
plt.xlim([x[0], x[-1]])
plt.title('U vs. r')
plt.xlabel('Distance r')
plt.ylabel('Total gravitational potential energy U(r)')
plt.show()
I am expecting the middle point to be located higher than the other two points, but that requires very high precision. How can I configure this?
Upvotes: 1
Views: 2195
Reputation: 69182
Your current issue is likely not with matplotlib but with np.longdouble. To discover whether this is the case, run np.finfo(np.longdouble). The result is machine dependent, but on my machine it says I'm using a float128 with the following description:
Machine parameters for float128
---------------------------------------------------------------
precision = 18 resolution = 1.0000000000000000715e-18
machep = -63 eps = 1.084202172485504434e-19
negep = -64 epsneg = 5.42101086242752217e-20
minexp = -16382 tiny = 3.3621031431120935063e-4932
maxexp = 16384 max = 1.189731495357231765e+4932
nexp = 15 min = -max
---------------------------------------------------------------
The precision is just an estimate (due to binary vs decimal representation), but 18 digits is the float128 limit, and your specific numbers only start to become interesting after that.
An easy test is to print y[1] - y[0] and see if you get something other than 0.0.
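For example, with the exact array construction from the question (a quick sanity check, not a fix):
import numpy as np

y = np.array([-76188946654889063420743355676.5,
              -76188946654889063419450832178.0,
              -76188946654889063450098993033.0], dtype=np.longdouble)

# On a typical machine this prints 0.0: the literals are parsed as Python
# floats (float64) before numpy ever sees them, so the ~1.3e9 difference
# between the first two values is already gone.
print(y[1] - y[0])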
An easy solution is to use Python ints, since Python ints have infinite precision and would capture most of the difference (or take int of 10*y to keep the .5 as well). So something like this:
import matplotlib.pyplot as plt
import numpy as np
x = np.array([1737100, 38380894.5188064386003616016502, 378029000.0], dtype=np.longdouble)
y = [-76188946654889063420743355676, -76188946654889063419450832178, -76188946654889063450098993033]
plt.scatter(x, [z - y[0] for z in y])  # plot offsets from y[0]; the differences fit easily in float64
plt.show()
Another solution is to represent the numbers from the start so that they require a more accessible precision (i.e., with most of the offset removed). And another is to use a high-precision float library. It depends on which way you want to go.
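As a sketch of the high-precision route, here is one way using the standard-library decimal module (that particular library is my choice, not something from the question; mpmath would work just as well). The values are built from strings so nothing is rounded, and only the small offsets are handed to matplotlib:
import matplotlib.pyplot as plt
from decimal import Decimal, getcontext

getcontext().prec = 50  # comfortably above the ~30 significant digits in the data
x = [1737100, 38380894.5188064386003616016502, 378029000.0]
y = [Decimal('-76188946654889063420743355676.5'),
     Decimal('-76188946654889063419450832178.0'),
     Decimal('-76188946654889063450098993033.0')]
# The exact differences are only ~1e9 to ~1e10, so they fit easily in a float64.
plt.scatter(x, [float(v - y[0]) for v in y])
plt.ylabel('U(r) - U(r[0])')
plt.show()
With the offset removed, the middle point does sit above the other two, which is what you were expecting to see.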
It's also worth noting that, at least for my system, which I think is typical, the default np.float is float64. For float64 the floating-point mantissa is 52 bits, whereas for float128 it's only 63 bits; in decimal terms, that's roughly 15 digits versus 18. So there's not a great precision increase in going from np.float to np.float128. (Here's a discussion of why np.longdouble (or np.float128) sounds like it's going to add a lot of precision, but doesn't.)
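You can check both figures on your own machine with np.finfo (output will vary by platform):
import numpy as np

# Approximate number of significant decimal digits for each type.
print(np.finfo(np.float64).precision)     # typically 15
print(np.finfo(np.longdouble).precision)  # typically 18 on x86 (80-bit extended double)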
(Finally, because this may cause confusion for some: if np.longdouble or np.float128 were useful for this problem, it's worth noting that the line in the question that sets the initial array wouldn't give the intended precision of np.longdouble. That is, y = np.array([-76188946654889063420743355676.5], dtype=np.longdouble) first parses the literals as Python floats, and only then creates the numpy array from them, so the precision is already lost before longdouble is involved. So if longdouble were the solution, a different approach to initializing the array would be needed.)
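If you did want to push longdouble as far as it goes, initializing the scalars from strings avoids the round trip through Python float. Whether the string parser keeps the full extended precision depends on your platform and numpy version, so treat this as a sketch and verify with the difference test above:
import numpy as np

# Parsing strings with np.longdouble sidesteps the float64 literal step.
y = np.array([np.longdouble('-76188946654889063420743355676.5'),
              np.longdouble('-76188946654889063419450832178.0'),
              np.longdouble('-76188946654889063450098993033.0')])
print(y[1] - y[0])  # a non-zero result means the extra precision survived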
Upvotes: 1