conradlee

Reputation: 13705

Converting numpy dtypes to native python types

If I have a numpy dtype, how do I automatically convert it to its closest python data type? For example,

numpy.float32 -> "python float"
numpy.float64 -> "python float"
numpy.uint32  -> "python int"
numpy.int16   -> "python int"

I could try to come up with a mapping of all of these cases, but does numpy provide some automatic way of converting its dtypes into the closest possible native python types? This mapping need not be exhaustive, but it should convert the common dtypes that have a close python analog. I think this already happens somewhere in numpy.

Upvotes: 385

Views: 489697

Answers (12)

gt6989b

Reputation: 4233

Original Post from 2019-06-20

Sorry to come late to the party, but I was looking at the problem of converting numpy.float64 to a regular Python float only. I saw 3 ways of doing that:

  1. npValue.item()
  2. npValue.astype(float)
  3. float(npValue)

Here are the relevant timings from IPython:

In [1]: import numpy as np

In [2]: aa = np.random.uniform(0, 1, 1000000)

In [3]: %timeit map(float, aa)
10 loops, best of 3: 117 ms per loop

In [4]: %timeit map(lambda x: x.astype(float), aa)
1 loop, best of 3: 780 ms per loop

In [5]: %timeit map(lambda x: x.item(), aa)
1 loop, best of 3: 475 ms per loop

It looks like float(npValue) is much faster.


Update from 2024-08-05

A tremendous thank you to Mateo de Mayo for pointing out that

  • in a modern setting, aa[0].astype(float) is of type numpy.float64, not float, so it does not solve the original problem;
  • all 3 solutions likely need a container wrapped around them, e.g., tuple(map(...)) or list(map(...)), if you want to store the results in a meaningful way.

Rerunning the other 2 options on a modern Windows laptop with the same settings (CPython 3.11.9 in IPython 8.23.0 with numpy 1.26.4), I get:

In [8]: %timeit map(float, aa)
71.4 ns ± 1.53 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)

In [9]: %timeit map(lambda x: x.item(), aa)
119 ns ± 3.33 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)

Conclusions

So float() is still faster, but by only about 50%, not 200%.
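
For completeness, here is a minimal sketch (my own, not part of the original benchmark) of the container-wrapped versions mentioned above; since map() is lazy in Python 3, wrapping it in list() is what actually performs and stores the conversions:

import numpy as np

aa = np.random.uniform(0, 1, 1000000)

# Wrap the lazy map() in list() so the conversions actually run and are stored.
floats_via_builtin = list(map(float, aa))             # float(npValue)
floats_via_item = list(map(lambda x: x.item(), aa))   # npValue.item()

print(type(floats_via_builtin[0]), type(floats_via_item[0]))
# <class 'float'> <class 'float'>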

Upvotes: 13

Mike T

Reputation: 43702

Use val.item() to convert most NumPy values to a native Python type:

import numpy as np

# for example, numpy.float32 -> python float
val = np.float32(0)
pyval = val.item()
print(type(pyval))         # <class 'float'>

# and similar...
type(np.float64(0).item()) # <class 'float'>
type(np.uint32(0).item())  # <class 'int'>
type(np.int16(0).item())   # <class 'int'>
type(np.cfloat(0).item())  # <class 'complex'>
type(np.datetime64(0, 'D').item())  # <class 'datetime.date'>
type(np.datetime64('2001-01-01 00:00:00').item())  # <class 'datetime.datetime'>
type(np.timedelta64(0, 'D').item()) # <class 'datetime.timedelta'>
...

(A related method, np.asscalar(val), was deprecated in NumPy 1.16 and removed in 1.23.)


For the curious, to build a table of conversions of NumPy array scalars for your system:

for name in dir(np):
    obj = getattr(np, name)
    if hasattr(obj, 'dtype'):
        try:
            if 'time' in name:
                npn = obj(0, 'D')
            else:
                npn = obj(0)
            nat = npn.item()
            print('{0} ({1!r}) -> {2}'.format(name, npn.dtype.char, type(nat)))
        except Exception:
            pass

There are a few NumPy types that have no native Python equivalent on some systems, including: clongdouble, clongfloat, complex192, complex256, float128, longcomplex, longdouble and longfloat. These need to be converted to their nearest NumPy equivalent before using .item().
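
For example, a hedged sketch of that two-step conversion (assuming your platform has an extended-precision np.longdouble; where longdouble is just an alias for float64, .item() already returns a float):

import numpy as np

x = np.longdouble(1.5)
# Cast to a dtype that has a native Python equivalent first, then call .item()
y = x.astype(np.float64).item()
type(y)  # <class 'float'>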

Upvotes: 550

Meitham

Reputation: 9680

NumPy holds that information in a mapping exposed as typeDict, so you could do something like the following:

>>> import __builtin__ as builtins  # if python2
>>> import builtins                 # if python3

Then:

>>> import numpy as np
>>> {v: k for k, v in np.typeDict.items() if k in dir(builtins)}
{numpy.object_: 'object',
 numpy.bool_: 'bool',
 numpy.string_: 'str',
 numpy.unicode_: 'unicode',
 numpy.int64: 'int',
 numpy.float64: 'float',
 numpy.complex128: 'complex'}

If you want the actual Python types rather than their names, you can do:

>>> {v: getattr(builtins, k) for k, v in np.typeDict.items() if k in vars(builtins)}
{numpy.object_: object,
 numpy.bool_: bool,
 numpy.string_: str,
 numpy.unicode_: unicode,
 numpy.int64: int,
 numpy.float64: float,
 numpy.complex128: complex}
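
On newer NumPy releases where np.typeDict is no longer available, a similar mapping can be built by probing the scalar types directly. A rough sketch (the dictionary name and the probing approach are mine, not from the original answer):

import numpy as np

# Map each concrete NumPy scalar type to the Python type of its .item() value.
py_equivalents = {}
for name in dir(np):
    obj = getattr(np, name)
    if isinstance(obj, type) and issubclass(obj, np.generic):
        try:
            py_equivalents[obj] = type(obj(0).item())
        except Exception:
            # abstract scalar types and types that can't be built from 0
            pass

py_equivalents[np.float64]  # <class 'float'>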

Upvotes: 5

mmdfl

Reputation: 81

If you have an array list_numpy_numbers of numpy types, do the following:

list_native_numbers = [i.item() for i in list_numpy_numbers]

Upvotes: 2

wombatonfire

Reputation: 5430

A side note about array scalars for those who don't need automatic conversion and know the numpy dtype of the value:

Array scalars differ from Python scalars, but for the most part they can be used interchangeably (the primary exception is for versions of Python older than v2.x, where integer array scalars cannot act as indices for lists and tuples). There are some exceptions, such as when code requires very specific attributes of a scalar or when it checks specifically whether a value is a Python scalar. Generally, problems are easily fixed by explicitly converting array scalars to Python scalars, using the corresponding Python type function (e.g., int, float, complex, str, unicode).

Source

Thus, in most cases conversion might not be needed at all, and the array scalar can be used directly. The effect should be identical to using a Python scalar:

>>> np.issubdtype(np.int64, int)
True
>>> np.int64(0) == 0
True
>>> np.issubdtype(np.float64, float)
True
>>> np.float64(1.1) == 1.1
True

But if, for some reason, the explicit conversion is needed, using the corresponding Python built-in function is the way to go. As shown in the other answer, it's also faster than the array scalar's item() method.
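
One illustrative case (my example, not from the original answer) where the explicit conversion is required: the standard-library json module does not know how to serialize NumPy scalars.

import json
import numpy as np

value = np.int64(42)
# json.dumps(value) raises TypeError: Object of type int64 is not JSON serializable
json.dumps(int(value))  # '42'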

Upvotes: 1

v.thorey

Reputation: 2087

If you want to convert a value that may be a numpy.ndarray, a numpy scalar, or already a native type to a native type, you can simply do:

converted_value = getattr(value, "tolist", lambda: value)()

tolist will convert your scalar or array to a native Python type. The fallback lambda takes care of the case where value is already native.
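
A quick usage sketch covering the three cases (the helper name to_native is mine):

import numpy as np

def to_native(value):
    # tolist() exists on NumPy scalars and arrays; the fallback lambda
    # returns the value unchanged when it is already a native Python object
    return getattr(value, "tolist", lambda: value)()

type(to_native(np.float32(1.5)))   # <class 'float'>
type(to_native(np.array([1, 2])))  # <class 'list'>
type(to_native(3))                 # <class 'int'>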

Upvotes: 45

Simon Streicher

Reputation: 2826

My approach is a bit forceful, but it seems to play nicely in all cases:

def type_np2py(dtype=None, arr=None):
    '''Return the closest python type for a given numpy dtype'''

    if ((dtype is None and arr is None) or
        (dtype is not None and arr is not None)):
        raise ValueError(
            "Provide either keyword argument `dtype` or `arr`: a numpy dtype or a numpy array.")

    if dtype is None:
        dtype = arr.dtype

    #1) Make a single-entry numpy array of the same dtype
    #2) force the array into a python 'object' dtype
    #3) the array entry should now be the closest python type
    single_entry = np.empty([1], dtype=dtype).astype(object)

    return type(single_entry[0])

Usage:

>>> type_np2py(int)
<class 'int'>

>>> type_np2py(np.int)
<class 'int'>

>>> type_np2py(str)
<class 'str'>

>>> type_np2py(arr=np.array(['hello']))
<class 'str'>

>>> type_np2py(arr=np.array([1,2,3]))
<class 'int'>

>>> type_np2py(arr=np.array([1.,2.,3.]))
<class 'float'>

Upvotes: 1

Qinhong Ma

Reputation: 9

Translate the whole data structure (here a DataFrame) at once instead of one element at a time:

def trans(data):
    """
    translate numpy.int/float into python native data type
    """
    result = []
    for i in data.index:
        # i = data.index[0]
        d0 = data.iloc[i].values
        d = []
        for j in d0:
            if 'int' in str(type(j)):
                res = j.item() if 'item' in dir(j) else j
            elif 'float' in str(type(j)):
                res = j.item() if 'item' in dir(j) else j
            else:
                res = j
            d.append(res)
        d = tuple(d)
        result.append(d)
    result = tuple(result)
    return result

However, it takes several minutes on large DataFrames. I am also looking for a more efficient solution, and I hope someone posts a better answer.
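
A possibly faster alternative, sketched under the assumption that data is a pandas DataFrame (as the loop above implies); it lets ndarray.tolist() do all the per-element conversions in a single call. The function name trans_fast is mine:

import numpy as np
import pandas as pd

def trans_fast(data):
    # to_numpy() returns an ndarray; tolist() converts every element
    # to its closest native Python type in one pass
    return tuple(tuple(row) for row in data.to_numpy().tolist())

df = pd.DataFrame({'a': [1.0, 2.0], 'b': [3.0, 4.0]})
trans_fast(df)              # ((1.0, 3.0), (2.0, 4.0))
type(trans_fast(df)[0][0])  # <class 'float'>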

Upvotes: -1

Carlos Santos

Reputation: 376

tolist() is a more general approach to accomplish this. It works on any primitive dtype and also on arrays and matrices.

It doesn't actually yield a list if called on primitive types:

numpy == 1.15.2

>>> import numpy as np

>>> np_float = np.float64(1.23)
>>> print(type(np_float), np_float)
<class 'numpy.float64'> 1.23

>>> listed_np_float = np_float.tolist()
>>> print(type(listed_np_float), listed_np_float)
<class 'float'> 1.23

>>> np_array = np.array([[1,2,3.], [4,5,6.]])
>>> print(type(np_array), np_array)
<class 'numpy.ndarray'> [[1. 2. 3.]
 [4. 5. 6.]]

>>> listed_np_array = np_array.tolist()
>>> print(type(listed_np_array), listed_np_array)
<class 'list'> [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]

Upvotes: 30

Aryeh Leib Taurog

Reputation: 5608

You can also call the item() method of the object you want to convert:

>>> from numpy import float32, uint32
>>> type(float32(0).item())
<type 'float'>
>>> type(uint32(0).item())
<type 'long'>

Upvotes: 11

Matt Alcock

Reputation: 12901

I think you can just write a general type-conversion function like so:

import numpy as np

def get_type_convert(np_type):
    convert_type = type(np.zeros(1, np_type).tolist()[0])
    return (np_type, convert_type)

print get_type_convert(np.float32)
>> (<type 'numpy.float32'>, <type 'float'>)

print get_type_convert(np.float64)
>> (<type 'numpy.float64'>, <type 'float'>)

This means there are no fixed lists, and your code will scale to more types.

Upvotes: 8

unutbu

Reputation: 880687

How about:

In [51]: dict([(d, type(np.zeros(1,d).tolist()[0])) for d in (np.float32,np.float64,np.uint32, np.int16)])
Out[51]: 
{<type 'numpy.int16'>: <type 'int'>,
 <type 'numpy.uint32'>: <type 'long'>,
 <type 'numpy.float32'>: <type 'float'>,
 <type 'numpy.float64'>: <type 'float'>}

Upvotes: 14
