I have not encountered any problems thus far, so this is a question purely out of curiosity.
In Python I usually define floats and arrays of floats like this:
import numpy as np
s = 1.0
v = np.array([1.0, 2.0, 3.0])
In the case above s is a float, but the elements of v are of type numpy.float64.
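The difference is easy to verify; a quick check (assuming NumPy is installed):

```python
import numpy as np

s = 1.0
v = np.array([1.0, 2.0, 3.0])

print(type(s))     # <class 'float'>
print(v.dtype)     # float64
print(type(v[0]))  # <class 'numpy.float64'>
```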
To be more consistent I could, for example, do this instead:
import numpy as np
s = np.float64(1.0)
v = np.array([1.0, 2.0, 3.0])
Are there cases, from an accuracy/precision point of view, where it is recommended to use the "consistent" approach? What kind of errors, if any, can I expect in the "inconsistent" approach?
Upvotes: 5
Views: 8567
Reputation: 152647
Python (at least CPython) uses doubles for its float type internally, and doubles are 64-bit floats (maybe not on every platform, but I haven't found a platform + compiler combination where doubles weren't 64-bit floats).
So you shouldn't expect any kind of problem whether you keep them as float or np.float64.
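A minimal sketch of that equivalence: a Python float and a np.float64 holding the same value compare equal, share the same bit pattern, and both carry 53 bits of mantissa precision.

```python
import sys
import numpy as np

x = 0.1              # Python float (a C double)
y = np.float64(0.1)  # NumPy 64-bit float

# Same value and same underlying bit pattern
print(x == y)                                  # True
print(np.float64(x).tobytes() == y.tobytes())  # True

# Both have 53 bits of mantissa precision
print(sys.float_info.mant_dig)         # 53
print(np.finfo(np.float64).nmant + 1)  # 53
```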
However, if you mix Python's float and NumPy's np.float32, you could see differences, because float has more precision (64 bits) than np.float32 (32 bits).
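A short illustration of that precision loss: converting 0.1 to np.float32 rounds it to the nearest 32-bit value, so it no longer compares equal to the 64-bit original.

```python
import numpy as np

x = 0.1              # 64-bit Python float
y = np.float32(0.1)  # rounded to 32-bit precision

print(x == y)             # False: y lost precision in the conversion
print(float(y))           # 0.10000000149011612
print(abs(float(y) - x))  # rounding error on the order of 1.5e-9
```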
Upvotes: 3