sten

Reputation: 7486

Wrong casting to int32? Expecting uint16 instead

Why does NumPy cast to int32 in the second case? I have to use the second form because I am adding a variable.

In [24]: np.zeros(10,dtype=np.uint16) - 1
Out[24]: array([65535, 65535, 65535, 65535, 65535, 65535, 65535, 65535, 65535, 65535], dtype=uint16)



In [23]: np.zeros(10,dtype=np.uint16) + (-1)
Out[23]: array([-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], dtype=int32)

In [26]: np.zeros(10,dtype=np.uint16) + np.uint16(-1)
Out[26]: array([65535, 65535, 65535, 65535, 65535, 65535, 65535, 65535, 65535, 65535], dtype=uint16)

Upvotes: 0

Views: 265

Answers (1)

Jacques Gaudin

Reputation: 16998

This is not necessarily wrong casting: NumPy is trying to find the smallest type that can hold all the possible results of the operation.

You can find out what type the result is going to be with np.result_type:

>>> np.result_type(np.uint16, -1)
dtype('int32')

>>> np.result_type(np.uint16, 1)
dtype('uint16')

With np.zeros(10, dtype=np.uint16) + np.uint16(-1), the result is a uint16; wrapping the scalar like this is what you need to do to force an unsigned result. Otherwise, NumPy assumes the result must be cast to a type that can hold negative values.
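The promotion described above can be checked directly. A minimal sketch, assuming the value-based casting rules of NumPy 1.x shown in the question (NumPy 2.0's NEP 50 changed scalar promotion, so the upcast to a signed type no longer happens there, and the exact signed type under 1.x is platform-dependent):

```python
import numpy as np

a = np.zeros(10, dtype=np.uint16)

# In-dtype subtraction: the scalar 1 fits in uint16, so the result stays
# uint16 and 0 - 1 wraps around to 65535.
b = a - 1
print(b.dtype, b[0])                   # uint16 65535

# np.result_type shows the promotion decision without doing any arithmetic.
# A non-negative scalar fits the unsigned dtype, so nothing is promoted:
print(np.result_type(np.uint16, 1))    # uint16

# A negative scalar forces promotion to a signed type under NumPy 1.x
# value-based casting (int32 or int64, depending on the platform):
print(np.result_type(np.uint16, -1))
```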

Upvotes: 1
