mxmlnkn

Reputation: 2131

Python numpy uint64 gets converted to float on division

I wanted to do a simple integer division, e.g. 1/3 = 0, but uint64 behaves very weirdly, resulting in my result being cast to float. Why?

python> uint64(100)/3
Out[0]: 33.333333333333336
python> uint64(100)/uint64(3)
Out[1]: 33
python> int64(100)/3
Out[2]: 33
python> int64(100)/uint64(3)
Out[3]: 33.333333333333336
python> int32(100)/int64(3)
Out[4]: 33

Upvotes: 1

Views: 1716

Answers (1)

mxmlnkn

Reputation: 2131

That's because NumPy sees a signed and an unsigned type and tries to automatically deduce a common result type, which would have to be signed. But since the first 64-bit operand is unsigned, the signed version would need 65 bits. As there is no integer type in Python/NumPy wider than 64 bits, NumPy chooses float64. The default type, e.g. for the plain divisor 3, is int64, which is why the first example gets cast to float64. This of course also happens with multiplication:

python> import numpy as np
python> type( np.int64( 10 ) * np.int64( 1 ) )
Out[0]: numpy.int64
python> type( np.uint64( 10 ) * np.uint64( 1 ) )
Out[1]: numpy.uint64
python> type( np.uint64( 10 ) * np.int64( 1 ) )
Out[2]: numpy.float64
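
The promotion rule can also be queried directly, without doing any arithmetic. A minimal sketch using np.result_type (a standard NumPy function; nothing here is specific to multiplication or division):

import numpy as np

print( np.result_type( np.uint64, np.uint64 ) )  # uint64
print( np.result_type( np.int32,  np.int64 ) )   # int64
print( np.result_type( np.uint64, np.int64 ) )   # float64, as no 65-bit integer type exists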

Note that this automatic promotion to float64 only applies when the operands differ in signedness, because the type deduction is value-agnostic; otherwise almost all results would have to end up as float64, since e.g. after three consecutive multiplications the result might no longer fit into uint64 anyway.

python> type(uint64(12345678900)*uint64(12345678900))
/usr/bin/ipython:1: RuntimeWarning: overflow encountered in ulong_scalars
#! /usr/bin/python
Out[3]: numpy.uint64
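
If the silent promotion to float64 is unwanted, one workaround, sketched here under the assumption that you control both operands, is to cast explicitly so that both sides share the same signedness (NumPy scalars support .astype just like arrays):

import numpy as np

a = np.uint64( 100 )
b = np.int64( 3 )

print( type( a * b ) )                      # <class 'numpy.float64'>, mixed signedness
print( type( a * b.astype( np.uint64 ) ) )  # <class 'numpy.uint64'>
print( type( a * np.uint64( b ) ) )         # <class 'numpy.uint64'>, same effect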

Note: Beware that in Python 3 the plain slash is no longer integer division by default. Instead you have to use 3 // 2 to get 1, as 3 / 2 == 1.5 in Python 3.
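
A quick sketch of the floor-division operator for completeness; plain Python ints and NumPy scalars of the same signedness keep an integer result, while the mixed-signedness case still promotes first:

import numpy as np

print( 3 / 2 )                                      # 1.5, true division in Python 3
print( 3 // 2 )                                     # 1, floor division
print( np.uint64( 100 ) // np.uint64( 3 ) )         # 33, stays uint64
print( type( np.uint64( 100 ) // np.int64( 3 ) ) )  # <class 'numpy.float64'>, promoted before dividing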

Upvotes: 2
