Reputation: 703
I have installed Anaconda 3 (64-bit) on my laptop and written the following code in Spyder:
import numpy.distutils.system_info as sysinfo
import numpy as np
import platform
sysinfo.platform_bits
platform.architecture()
my_array = np.array([0,1,2,3])
my_array.dtype
The output of these commands shows the following:
sysinfo.platform_bits
Out[31]: 64
platform.architecture()
Out[32]: ('64bit', 'WindowsPE')
my_array = np.array([0,1,2,3])
my_array.dtype
Out[33]: dtype('int32')
My question is: even though my system is 64-bit, why is the default array dtype int32 instead of int64?
Any help is appreciated.
Upvotes: 34
Views: 16614
Reputation: 109
You can explicitly cast the array to the needed data type, like so:
int64_array = int32_array.astype(np.int64)
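For example (a minimal sketch; int32_array here is just a hypothetical array that happens to have the smaller dtype):
import numpy as np

# A hypothetical int32 array, e.g. the default on 64-bit Windows
int32_array = np.array([0, 1, 2, 3], dtype=np.int32)

# astype returns a copy with the requested dtype; the original is unchanged
int64_array = int32_array.astype(np.int64)
print(int64_array.dtype)  # dtype('int64')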
Upvotes: 1
Reputation: 789
You can create the array with the data type set to int64. E.g.,
# Windows uses int32 by default, but if we want int64 we can say so explicitly
x = np.array([1, 2, 3, 4, 5], dtype=np.int64)
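Checking the result (assuming the snippet above has been run):
>>> x.dtype
dtype('int64')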
Upvotes: 1
Reputation: 332
The original poster, Prana, asked a very good question: "Why is the integer default set to 32-bit, on a 64-bit machine?"
As near as I can tell, the short answer is: "Because it was designed wrong". It seems obvious that a 64-bit machine should define the default integer in any associated interpreter as 64-bit. But of course, the two other answers explain why this is not the case. Things are now different, so I offer this update.
What I notice is that for both CentOS 7.4 Linux and Mac OS X 10.10.5 (the new and the old), running Python 2.7.14 with NumPy 1.14.0 (as of January 2018), the default integer is now defined as 64-bit. (The my_array.dtype in the initial example would now report dtype('int64') on both platforms.)
Using 32-bit integers as the default integer in any interpreter can lead to very squirrelly results if you are doing integer math, as this question pointed out:
Using numpy to square value gives negative number
It appears that Python and NumPy have since been updated and revised (corrected, one might argue), so that to replicate the problem described in the question linked above, you now have to explicitly define the NumPy array as int32.
On both platforms, the default integer now looks to be int64. This code runs the same on both (CentOS 7.4 and Mac OS X 10.10.5):
>>> import numpy as np
>>> tlist = [1, 2, 47852]
>>> t_array = np.asarray(tlist)
>>> t_array.dtype
dtype('int64')
>>> print t_array ** 2
[ 1 4 2289813904]
But if we make t_array a 32-bit integer, we get the following, because the calculation overflows into the sign bit of the 32-bit word.
>>> t_array32 = np.asarray(tlist, dtype=np.int32)
>>> t_array32.dtype
dtype('int32')
>>> print t_array32 ** 2
[ 1 4 -2005153392]
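If your data arrives as int32 (say, read from a file), one way to avoid the rollover is to upcast before the arithmetic. A minimal sketch, reusing the same t_array32:
>>> print t_array32.astype(np.int64) ** 2
[ 1 4 2289813904]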
The reason for using int32 is, of course, efficiency. There are some situations (such as using TensorFlow or other neural-network machine-learning tools) where you want 32-bit representations (mostly float, of course), because the speed gains over 64-bit floats can be quite significant.
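As a rough illustration of the memory side of that trade-off (a minimal sketch; the 2x factor follows directly from the element sizes of 8 versus 4 bytes):
>>> a64 = np.zeros(1000000, dtype=np.float64)
>>> a32 = a64.astype(np.float32)
>>> (a64.nbytes, a32.nbytes)
(8000000, 4000000)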
Upvotes: 8
Reputation: 114811
In Microsoft C, even on a 64-bit system, the size of the long int data type is 32 bits. (See, for example, https://msdn.microsoft.com/en-us/library/9c3yd98k.aspx.) NumPy inherits the default size of an integer from the C compiler's long int.
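A quick way to see this on your own machine (a minimal check; the exact result depends on the platform and, in newer NumPy releases, possibly on the NumPy version):
>>> import numpy as np
>>> np.dtype(np.int_)
dtype('int32')  # on 64-bit Windows; dtype('int64') on 64-bit Linux/macOS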
Upvotes: 15
Reputation: 12590
The default integer type np.int_ is C long:
http://docs.scipy.org/doc/numpy-1.10.1/user/basics.types.html
But C long is 32-bit on win64:
https://msdn.microsoft.com/en-us/library/9c3yd98k.aspx
This is kind of a weirdness of the win64 platform.
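You can confirm the size of C long from Python itself with ctypes (a minimal check; 4 bytes on win64, typically 8 on 64-bit Linux/macOS):
>>> import ctypes
>>> ctypes.sizeof(ctypes.c_long)
4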
Upvotes: 25