Reputation: 381
I'm using Python's ctypes library to call my C code. My problem is that when I try to create a c_float, I seem to obtain a slightly different value to what I set.
For example:
from ctypes import c_float
value = 0.2
print(value)
print(c_float(value))
0.2
c_float(0.20000...298...)
How can I avoid this?
Upvotes: 2
Views: 1218
Reputation: 155156
Python's float corresponds to a C double, typically 64 bits wide. On the other hand, a ctypes c_float corresponds to a C float, typically 32 bits wide. This is why you observe a degradation in accuracy - it is expected when representing a number inexactly in a narrower type.
c_float's __repr__ adds to the confusion by converting the number back to a Python float in order to print it, so its output includes additional, unnecessary digits. A 32-bit float needs at most 9 significant digits to round-trip exactly, while a 64-bit float needs 17. The extra digits carry no new information and make it look like the number has been corrupted.
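A small sketch of what that looks like in practice (nothing here is specific to ctypes except c_float itself; the struct round-trip is only there to show that 9 digits are enough to recover the 32-bit value):
import struct
from ctypes import c_float

x = 0.2
print(repr(x))          # 0.2 -- shortest repr of the 64-bit double
f32 = c_float(x).value  # nearest 32-bit float, read back into a double
print(repr(f32))        # 0.20000000298023224 -- full 17-digit double repr
print(f"{f32:.9g}")     # 0.200000003 -- 9 significant digits
# 9 significant digits are enough to get back the same 32-bit value:
print(struct.unpack("f", struct.pack("f", float(f"{f32:.9g}")))[0] == f32)  # True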
Upvotes: 1
Reputation: 60167
Jan Rüegg is right - this is just how floats work.
If you're wondering why this only shows up with c_float, it's because c_floats print as "c_float({!r})".format(self.value), and self.value is a double-precision Python float.
Python's float type prints the shortest representation that converts to the floating point number, so although float(0.2) is not exact, it can still be printed in short form.
The inaccuracy in c_float(0.2) is high enough to make it closer to
0.20000000298023223876953125
than to
0.200000000000000011102230246251565404236316680908203125
Since float(0.2) represents the latter, the short form "0.2" cannot be used.
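You can see both exact values from Python itself; decimal.Decimal converts a binary float to its exact decimal expansion (just a sketch to illustrate the two numbers above):
from decimal import Decimal
from ctypes import c_float

print(Decimal(0.2))
# 0.200000000000000011102230246251565404236316680908203125
print(Decimal(c_float(0.2).value))
# 0.20000000298023223876953125
print(repr(0.2))                 # 0.2 -- the short form round-trips the double
print(repr(c_float(0.2).value))  # 0.20000000298023224 -- but not the float32 value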
Upvotes: 3
Reputation: 10065
Floats have a fixed precision and cannot necessarily represent numbers with a finite decimal representation exactly.
On this website you can see how the float is actually stored in memory. If you enter "0.2" in "Decimal Representation", you can see how it is converted to a hex number, and that the stored number is exactly the number you noticed.
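If you prefer to stay in Python, a rough equivalent using the struct module (struct isn't mentioned on that site; it's just one standard-library way to look at the raw bytes):
import struct

bits = struct.pack(">f", 0.2)        # 0.2 stored as a big-endian 32-bit float
print(bits.hex())                    # 3e4ccccd -- the hex representation
print(struct.unpack(">f", bits)[0])  # 0.20000000298023224 -- the stored value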
Upvotes: 0