Karnivaurus

Reputation: 24111

NumPy array is not JSON serializable

After creating a NumPy array, and saving it as a Django context variable, I receive the following error when loading the webpage:

array([   0,  239,  479,  717,  952, 1192, 1432, 1667], dtype=int64) is not JSON serializable

What does this mean?

Upvotes: 470

Views: 573870

Answers (17)

Andreas H.

Reputation: 6073

Something which has not been addressed so far, but which I think is worth mentioning (even though not an answer in the strict sense):

One should be aware that JSON is inherently a bad choice for numpy/numerical data, for the following reasons:

  • JSON does not support Inf and NaN, so a numpy array containing infinity or Not a Number cannot be serialized to standard JSON. One has to switch on special JSON serializer extensions; however, expect problems reading such JSON into another tool where you do not have control over the source code. Some JSON encoders also serialize inf and nan to strings ("Inf" and "NaN") by default, which is complete madness: it prevents any automatic deserialization (your array will be a mixture of numbers and strings that needs post-processing).
  • JSON gives no precision guarantees, so serializing and then deserializing numbers generally gives a different result. There is also no guarantee on rounding errors; it is completely at the discretion of the JSON serializer library.
  • The text-to-number conversion has high computational overhead.
  • There are no true multidimensional arrays. You may argue "not true, we can use an array of arrays", but that is not the same as a multidimensional array, and it is only a minor issue that the inner arrays have to have the same length. More importantly, one does not have round-trip capability in all cases: a 0xN matrix cannot be stored, because when deserializing the array of arrays it appears as a 1d array of length 0. The 2d array is converted to a 1d array and the inner dimension is lost (see the sketch after this list). For any use-case working with matrices this is likely to cause huge problems.
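
A minimal demonstration of the 0xN round-trip loss, assuming plain numpy and the standard json module:

import json
import numpy as np

m = np.zeros((0, 5))             # a 0x5 matrix
text = json.dumps(m.tolist())    # serializes to "[]"
restored = np.array(json.loads(text))
print(m.shape, restored.shape)   # (0, 5) (0,) -- the inner dimension is lost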

To summarize:

  • JSON is made for use-cases where numerical data or numerical vector data does not occur; the standard pays no real attention to this. Any user who wants to use numerical/numerical-array data with JSON should be aware of that.
  • Regarding the Inf and NaN issue: it is better to switch to YAML. YAML 1.2 is a strict superset of JSON (any valid JSON is also valid YAML) and it does support Inf and NaN (.inf and .nan are the literals). So better to use a YAML serializer operated in inline mode, which looks like JSON (see the sketch below).
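
A sketch of that YAML route, assuming PyYAML is installed (pip install pyyaml); default_flow_style=True gives the JSON-like inline mode:

import yaml   # PyYAML
import numpy as np

data = {'values': np.array([1.0, np.inf, np.nan]).tolist()}
text = yaml.safe_dump(data, default_flow_style=True)
print(text)                  # {values: [1.0, .inf, .nan]}
print(yaml.safe_load(text))  # {'values': [1.0, inf, nan]} -- round-trips cleanly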

Upvotes: 0

Jeff Hykin

Reputation: 2617

The other answers will not work if someone else's code (e.g. a module) is doing the json.dumps(). This happens often, for example with webservers that auto-convert their return responses to JSON, meaning we can't always change the arguments to json.dumps().


This answer solves that, and is based on a (relatively) new solution that works for any 3rd-party class (not just numpy).

TLDR

pip install json_fix

import json_fix # import this anytime before json.dumps gets called
import json

# create a converter
import numpy
json.fallback_table[numpy.ndarray] = lambda array: array.tolist()

# no additional arguments needed: 
json.dumps(
   dict(thing=10, nested_data=numpy.array((1,2,3)))
)
#>>> '{"thing": 10, "nested_data": [1, 2, 3]}' 

Upvotes: 5

karlB

Reputation: 4731

Store a numpy.ndarray, or any nested-list composition, as JSON.

import json
import numpy as np

class NumpyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        return super().default(obj)

a = np.array([[1, 2, 3], [4, 5, 6]])
print(a.shape)
json_dump = json.dumps({'a': a, 'aa': [2, (2, 3, 4), a], 'bb': [2]}, 
                       cls=NumpyEncoder)
print(json_dump)

Will output:

(2, 3)
{"a": [[1, 2, 3], [4, 5, 6]], "aa": [2, [2, 3, 4], [[1, 2, 3], [4, 5, 6]]], "bb": [2]}

To restore from JSON:

json_load = json.loads(json_dump)
a_restored = np.asarray(json_load["a"])
print(a_restored)
print(a_restored.shape)

Will output:

[[1 2 3]
 [4 5 6]]
(2, 3)

Upvotes: 463

rootMighty

Reputation: 1

I had the same problem, but slightly different: my values were of type float32, so I addressed it by converting them to plain Python floats with float(value).
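
A minimal sketch of that conversion, assuming numpy and the standard json module:

import json
import numpy as np

value = np.float32(0.9225)
json.dumps({'value': float(value)})  # plain Python floats are serializable; np.float32 is not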

Upvotes: 0

travelingbones

Reputation: 8408

I regularly "jsonify" np.arrays. Try using the ".tolist()" method on the arrays first, like this:

import numpy as np
import codecs, json 

a = np.arange(10).reshape(2,5) # a 2 by 5 array
b = a.tolist() # nested lists with same data, indices
file_path = "/path.json" ## your path variable
json.dump(b, codecs.open(file_path, 'w', encoding='utf-8'), 
          separators=(',', ':'), 
          sort_keys=True, 
          indent=4) ### this saves the array in .json format

In order to "unjsonify" the array use:

obj_text = codecs.open(file_path, 'r', encoding='utf-8').read()
b_new = json.loads(obj_text)
a_new = np.array(b_new)

Upvotes: 493

moshevi

Reputation: 5923

Use the json.dumps default kwarg:

default should be a function that gets called for objects that can’t otherwise be serialized. ... or raise a TypeError

In the default function, check whether the object is from the numpy module; if so, use ndarray.tolist for an ndarray or .item for any other numpy-specific type.

import json
import numpy as np

def default(obj):
    if type(obj).__module__ == np.__name__:
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        else:
            return obj.item()
    raise TypeError('Unknown type:', type(obj))

dumped = json.dumps(data, default=default)

Upvotes: 54

tsveti_iko

Reputation: 7952

I found this to be the best solution if you have nested numpy arrays in a dictionary:

import json
import numpy as np

class NumpyEncoder(json.JSONEncoder):
    """ Special json encoder for numpy types """
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        elif isinstance(obj, np.floating):
            return float(obj)
        elif isinstance(obj, np.ndarray):
            return obj.tolist()
        return json.JSONEncoder.default(self, obj)

dumped = json.dumps(data, cls=NumpyEncoder)

with open(path, 'w') as f:
    f.write(dumped)  # dumped is already a JSON string; passing it to json.dump would double-encode it

Thanks to this guy.

Upvotes: 104

krishna kumar mishra

Reputation: 179

Use NumpyEncoder; it will process the json dump successfully, without throwing NumPy array is not JSON serializable:

import numpy as np
import json
from numpyencoder import NumpyEncoder

arr = np.array([0, 239, 479, 717, 952, 1192, 1432, 1667], dtype=np.int64)
json.dumps(arr, cls=NumpyEncoder)

Upvotes: 2

TypeError: array([[0.46872085, 0.67374235, 1.0218339 , 0.13210179, 0.5440686 , 0.9140083 , 0.58720225, 0.2199381 ]], dtype=float32) is not JSON serializable

The above error was thrown when I tried to pass a list of data to model.predict() while expecting the response in JSON format.

json_file = open('model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("model.h5")
loaded_model.compile(optimizer='adam', loss='mean_squared_error')
X = [[874, 12450, 678, 0.922500, 0.113569]]
d = pd.DataFrame(X)
prediction = loaded_model.predict(d)
return jsonify(prediction)

But luckily I found the hint that resolves the error: serializing objects is applicable only for the following conversions, so the mapping has to be: object - dict, array - list, string - string, integer - integer.

If you look at the line prediction = loaded_model.predict(d) above, that line of code generates output of array type, and an array cannot be converted to JSON format directly.

Finally I found the solution, just by converting the obtained output to the type list with the following lines of code:

prediction = loaded_model.predict(d)
listtype = prediction.tolist()
return jsonify(listtype)

Boom! Finally got the expected output.

Upvotes: 0

Robert GRZELKA

Reputation: 109

You may do a simple for loop, checking the types:

with open("jsondontdoit.json", 'w') as fp:
    for key in bests.keys():
        if type(bests[key]) == np.ndarray:
            bests[key] = bests[key].tolist()
            continue
        for idx in bests[key]:  # assumes bests[key] is a dict, so idx is one of its keys
            if type(bests[key][idx]) == np.ndarray:
                bests[key][idx] = bests[key][idx].tolist()
    json.dump(bests, fp)  # the with block closes fp; no explicit close needed

Upvotes: 1

steco

Reputation: 1373

You could also use the default argument, for example:

import json
import numpy as np

def myconverter(o):
    if isinstance(o, np.float32):
        return float(o)
    # note: falling through returns None, which serializes as null;
    # raise a TypeError here instead if unknown types should fail loudly

json.dumps(data, default=myconverter)

Upvotes: 4

KS HARSHA

Reputation: 67

This is a different answer, but it might help people who are trying to save data and then read it again.
There is hickle, which is faster than pickle and easier.
I tried to save and read it with a pickle dump, but while reading there were a lot of problems; I wasted an hour and still didn't find a solution, though I was working on my own data to create a chatbot.

vec_x and vec_y are numpy arrays:

import hickle as hkl

data = [vec_x, vec_y]
hkl.dump(data, 'new_data_file.hkl')

Then you just read it and perform the operations:

data2 = hkl.load( 'new_data_file.hkl' )

Upvotes: 1

Mark

Reputation: 19969

This is not supported by default, but you can make it work quite easily! There are several things you'll want to encode if you want the exact same data back:

  • The data itself, which you can get with obj.tolist() as @travelingbones mentioned. Sometimes this may be good enough.
  • The data type. I feel this is important in quite some cases.
  • The dimension (not necessarily 2D), which could be derived from the above if you assume the input is indeed always a 'rectangular' grid.
  • The memory order (row- or column-major). This doesn't often matter, but sometimes it does (e.g. performance), so why not save everything?

Furthermore, your numpy array could be part of your data structure, e.g. you have a list with some matrices inside. For that you could use a custom encoder which basically does the above (see the sketch below).
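
A minimal sketch of such a custom encoder, covering the first three points above (memory order omitted); the __ndarray__ key and the helper names are just illustrative choices, not a fixed API:

import json
import numpy as np

def encode_array(obj):
    # json.dumps calls this for objects it cannot serialize itself
    if isinstance(obj, np.ndarray):
        return {'__ndarray__': obj.tolist(),
                'dtype': str(obj.dtype),
                'shape': obj.shape}
    raise TypeError('Cannot serialize %r' % type(obj))

def decode_array(dct):
    # object_hook: rebuild the ndarray with its original dtype and shape
    if '__ndarray__' in dct:
        return np.asarray(dct['__ndarray__'], dtype=dct['dtype']).reshape(dct['shape'])
    return dct

a = np.arange(10, dtype=np.int16).reshape(2, 5)
text = json.dumps(a, default=encode_array)
restored = json.loads(text, object_hook=decode_array)
assert restored.dtype == a.dtype and restored.shape == a.shape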

This should be enough to implement a solution. Or you could use json-tricks, which does just this and supports various other types (disclaimer: I made it).

pip install json-tricks

Then

from datetime import datetime
from decimal import Decimal
from fractions import Fraction
from numpy import arange
from json_tricks import dumps

data = [
    arange(0, 10, 1, dtype=int).reshape((2, 5)),
    datetime(year=2017, month=1, day=19, hour=23, minute=0, second=0),
    1 + 2j,
    Decimal(42),
    Fraction(1, 3),
    MyTestCls(s='ub', dct={'7': 7}),  # a custom class (definition not shown)
    set(range(7)),
]
# Encode with metadata to preserve types when decoding
print(dumps(data))

Upvotes: 9

John Zwinck

Reputation: 249093

You can use Pandas:

import pandas as pd
pd.Series(your_array).to_json(orient='values')
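
To get an ndarray back from that text, one sketch (parsing the values-oriented output with the standard json module; the resulting dtype may differ from the original):

import json
import numpy as np
import pandas as pd

your_array = np.array([0, 239, 479])
text = pd.Series(your_array).to_json(orient='values')
restored = np.array(json.loads(text))  # round-trips back to an ndarray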

Upvotes: 76

JLT

Reputation: 760

I had a similar problem with a nested dictionary with some numpy.ndarrays in it.

def jsonify(data):
    json_data = dict()
    for key, value in data.items():  # use data.iteritems() on Python 2
        if isinstance(value, list):  # for lists
            value = [jsonify(item) if isinstance(item, dict) else item for item in value]
        if isinstance(value, dict):  # for nested dicts
            value = jsonify(value)
        if isinstance(key, int):  # if key is an integer: convert to string
            key = str(key)
        if type(value).__module__ == 'numpy':  # if value is numpy.*: convert to python list
            value = value.tolist()
        json_data[key] = value
    return json_data

Upvotes: 6

Roei Bahumi

Reputation: 3673

Here is an implementation that works for me and removes all NaNs (assuming these are simple objects, lists or dicts):

from numpy import isnan

def remove_nans(my_obj, val=None):
    if isinstance(my_obj, list):
        for i, item in enumerate(my_obj):
            if isinstance(item, list) or isinstance(item, dict):
                my_obj[i] = remove_nans(my_obj[i], val=val)

            else:
                try:
                    if isnan(item):
                        my_obj[i] = val
                except Exception:
                    pass

    elif isinstance(my_obj, dict):
        for key, item in my_obj.items():  # use iteritems() on Python 2
            if isinstance(item, list) or isinstance(item, dict):
                my_obj[key] = remove_nans(my_obj[key], val=val)

            else:
                try:
                    if isnan(item):
                        my_obj[key] = val
                except Exception:
                    pass

    return my_obj

Upvotes: 1

ntk4

Reputation: 1307

Also, there is some very interesting further information on lists vs. arrays in Python here: Python List vs. Array - when to use?

It could be noted that once I convert my arrays into lists before saving them in a JSON file (in my deployment right now, anyway), once I read that JSON file for use later, I can continue to use them in list form (as opposed to converting them back to arrays).

AND it actually looks nicer (in my opinion) on the screen as a list (comma-separated) vs. an array (not comma-separated) this way.

Using @travelingbones's .tolist() method above, I've been using it as such (catching a few errors I've found too):

SAVE DICTIONARY

import json  # DIR is assumed to be defined elsewhere as your output directory

def writeDict(values, name):
    writeName = DIR+name+'.json'
    with open(writeName, "w") as outfile:
        json.dump(values, outfile)

READ DICTIONARY

def readDict(name):
    readName = DIR+name+'.json'
    try:
        with open(readName, "r") as infile:
            dictValues = json.load(infile)
            return(dictValues)
    except IOError as e:
        print(e)
        return('None')
    except ValueError as e:
        print(e)
        return('None')

Hope this helps!

Upvotes: 2
