Reputation: 12433
I have a one-dimensional NumPy array like this:
array([0.6441961 , 0.36957273, 1. , 0.4898495 , 0.24318133,
0.3721704 , 0.3205053 , 0.16859561, 0.26045567, 0.5081331 ,
0.66135716, 0.63181865])
My code below simply prints the variable and its sum.
print(a)
print(np.sum(a, dtype='int16'))
It shows this output:
[0.6441961 0.36957273 1. 0.4898495 0.24318133 0.3721704
0.3205053 0.16859561 0.26045567 0.5081331 0.66135716 0.63181865]
1
Why does it return 1?
Upvotes: 1
Views: 44
Reputation: 14399
You want:
int(a.sum())
or
a.sum().astype('int16') # totally didn't see @DaniMesejo's answer before I edited this in
What's wrong:
np.sum(a, dtype='int16')
casts a to int16 before summing, and casting a float to an integer type implicitly truncates toward zero (the same as a floor for positive values). So everything in your array that's not 1. becomes 0, and the sum is 1.
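To see the truncation directly, here is a minimal sketch using the array from the question:
import numpy as np

a = np.array([0.6441961, 0.36957273, 1., 0.4898495, 0.24318133, 0.3721704,
              0.3205053, 0.16859561, 0.26045567, 0.5081331, 0.66135716, 0.63181865])

print(a.astype('int16'))        # [0 0 1 0 0 0 0 0 0 0 0 0] -- every fraction truncates to 0
print(a.astype('int16').sum())  # 1, the same as np.sum(a, dtype='int16')
print(int(a.sum()))             # 5, since the float sum 5.6698... truncates to 5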
Upvotes: 4
Reputation: 61910
When you do:
res = np.sum(a, dtype='int16')
it tells NumPy to cast every element of a to int16 and then sum. Every element except 1.0 lies strictly between 0 and 1, so each of them truncates to 0. Let's change an element of the array to verify this claim:
a = np.array([0.6441961, 0.36957273, 1., 2.4898495, 0.24318133, 0.3721704,
0.3205053, 0.16859561, 0.26045567, 0.5081331, 0.66135716, 0.63181865])
res = np.sum(a, dtype='int16')
print(res)
Output
3
The output is 3 because 1. casts to 1 and 2.4898495 truncates to 2. One solution to your problem is to sum first and cast afterwards (using the original array again):
res = np.sum(a).astype('int16')
print(res)
Output
5
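Note that casting with .astype (or int()) truncates toward zero. If what you actually want is rounding to the nearest integer (an assumption about intent), round before casting, e.g. with np.rint:
# With the original array, np.sum(a) is about 5.6698, so rounding gives 6 instead of the truncated 5
res = np.rint(np.sum(a)).astype('int16')
print(res)  # 6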
Upvotes: 4