Kanad

Reputation: 1821

How to fix 'Object arrays cannot be loaded when allow_pickle=False' for imdb.load_data() function?

I'm trying to implement the binary classification example using the IMDb dataset in Google Colab. I have implemented this model before, but when I tried to do it again after a few days, load_data() returned a ValueError: 'Object arrays cannot be loaded when allow_pickle=False'.

I have already tried solving this, referring to an existing answer for a similar problem: How to fix 'Object arrays cannot be loaded when allow_pickle=False' in the sketch_rnn algorithm. But it turns out that just adding an allow_pickle argument isn't sufficient.

My code:

from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

The error:

ValueError                                Traceback (most recent call last)
<ipython-input-1-2ab3902db485> in <module>()
      1 from keras.datasets import imdb
----> 2 (train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

2 frames
/usr/local/lib/python3.6/dist-packages/keras/datasets/imdb.py in load_data(path, num_words, skip_top, maxlen, seed, start_char, oov_char, index_from, **kwargs)
     57                     file_hash='599dadb1135973df5b59232a0e9a887c')
     58     with np.load(path) as f:
---> 59         x_train, labels_train = f['x_train'], f['y_train']
     60         x_test, labels_test = f['x_test'], f['y_test']
     61 

/usr/local/lib/python3.6/dist-packages/numpy/lib/npyio.py in __getitem__(self, key)
    260                 return format.read_array(bytes,
    261                                          allow_pickle=self.allow_pickle,
--> 262                                          pickle_kwargs=self.pickle_kwargs)
    263             else:
    264                 return self.zip.read(key)

/usr/local/lib/python3.6/dist-packages/numpy/lib/format.py in read_array(fp, allow_pickle, pickle_kwargs)
    690         # The array contained Python objects. We need to unpickle the data.
    691         if not allow_pickle:
--> 692             raise ValueError("Object arrays cannot be loaded when "
    693                              "allow_pickle=False")
    694         if pickle_kwargs is None:

ValueError: Object arrays cannot be loaded when allow_pickle=False

Upvotes: 163

Views: 294661

Answers (27)

Fenil

Reputation: 79

Changing to this line of code worked for me and solved the error:

data_dict = np.load(data_path, encoding='latin1', allow_pickle=True).item()

Do check that your numpy module is imported properly; this call replaces the deprecated one.

Upvotes: 2

Tarun Gandotra

Reputation: 51

If you are loading a compressed storage file such as the npz format, then the code below will do the job:

np.load(path, allow_pickle=True)

Make sure that when specifying the path you surround it with quotes, and that allow_pickle=True is not inside any quotes, as in the example below.
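For instance, a minimal sketch (the file name features.npz is just a placeholder):

import numpy as np

# the path is a quoted string; allow_pickle=True is a plain keyword argument
data = np.load('features.npz', allow_pickle=True)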

Upvotes: 3

Ivan Borshchov

Reputation: 3610

There are a lot of answers here, but to really understand the issue I recommend you just try the following simple example:

import numpy as np

# A plain numeric array
a = np.array([[1, 2, 3], [4, 5, 6]])
# A dictionary, which numpy stores as an object array
b = {'data': 'somet',
     'data_2': 'defin'}
# Save both into one npz file
np.savez('/content/123.npz', a=a, b=b)
# Load the file back
data = np.load('/content/123.npz')
print(data['b'])

This simple example already reproduces the error. The thing is that you had a dictionary serialized inside the npz file.

Now just try replacing the np.load line with:

data = np.load('/content/123.npz',allow_pickle=True)

And it works! Source of example: fix object arrays cannot be loaded when allow_pickle=False

Upvotes: 2

Mustafa Sakhai

Reputation: 21

[Fast solution] I got it working by passing allow_pickle=True when calling np.load:

labels = np.load("Labels", allow_pickle=True)

Upvotes: 2

Elijah Awe

Reputation: 11

Instead of

from keras.datasets import imdb

use

from tensorflow.keras.datasets import imdb

top_words = 10000
((x_train, y_train), (x_test, y_test)) = imdb.load_data(num_words=top_words, seed=21)

Upvotes: 1

Madhuparna Bhowmik

Reputation: 2290

I just used allow_pickle = True as an argument to np.load() and it worked for me.

np.load(path, allow_pickle=True)

Upvotes: 115

Mudasir Habib

Reputation: 848

I was facing the same issue; here is the line from the error:

File "/usr/lib/python3/dist-packages/numpy/lib/npyio.py", line 260, in __getitem__

So I solved the issue by updating the npyio.py file. Line 196 of npyio.py assigns a value to allow_pickle, so I updated that line to:

self.allow_pickle = True

Upvotes: 1

Shaina Raza

Reputation: 1648

This error occurs when you have an older version of torch, such as 1.6.0 with torchvision==0.7.0. You can check your torch version with this command:

import torch
print(torch.__version__)

This error is already resolved in newer versions of torch.

You can remove this error by making the following change in np.load():

np.load(somepath, allow_pickle=True)

Passing allow_pickle=True will solve it.

Upvotes: 2

Carlos S Traynor

Reputation: 141

The error can also occur if you try to save a Python list of numpy arrays with np.save and load it with np.load. I am only mentioning this for the sake of people searching, so they can rule it out. Using allow_pickle=True also fixes the issue if a list is indeed what you meant to save and load; a minimal sketch is below.
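For example (the file name ragged.npy is just for illustration):

import numpy as np

# A list of arrays with different lengths is stored as an object array
ragged = [np.array([1, 2, 3]), np.array([4, 5])]
np.save('ragged.npy', np.array(ragged, dtype=object))

# Loading it back requires allow_pickle=True
loaded = np.load('ragged.npy', allow_pickle=True)
print(loaded[0], loaded[1])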

Upvotes: 3

Otabek

Reputation: 41

Use this

 from tensorflow.keras.datasets import imdb

instead of this

 from keras.datasets import imdb

Upvotes: 3

Shristi Gupta

Reputation: 29

I landed up here, tried the approaches above, and could not figure it out.

I was actually working on pre-existing code where

pickle.load(path)

was used so i replaced it with

np.load(path, allow_pickle=True)

Upvotes: 2

Nasif Imtiaz Ohi

Reputation: 1711

The easiest way is to edit imdb.py, passing allow_pickle=True to np.load at the line where imdb.py throws the error.
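For reference, based on the traceback above and the diffs shown in other answers below, the edited lines would look roughly like this:

with np.load(path, allow_pickle=True) as f:
    x_train, labels_train = f['x_train'], f['y_train']
    x_test, labels_test = f['x_test'], f['y_test']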

Upvotes: 1

travs15

Reputation: 462

In my case it worked with:

np.load(path, allow_pickle=True)

Upvotes: 31

Farid Khafizov

Reputation: 1200

None of the solutions listed above worked for me; I run Anaconda with Python 3.7.3. What worked for me was:

  • run "conda install numpy==1.16.1" from the Anaconda PowerShell

  • close and reopen the notebook

Upvotes: 5

James

Reputation: 11

I don't usually post to these things, but this was super annoying. The confusion comes from the fact that some of the Keras imdb.py files have already updated

with np.load(path) as f:

to the version with allow_pickle=True. Make sure to check the imdb.py file to see whether this change has already been implemented. If it has, the following works fine:

from keras.datasets import imdb
(train_text, train_labels), (test_text, test_labels) = imdb.load_data(num_words=10000)

Upvotes: 1

Sajad Norouzi

Reputation: 1920

The answer from @cheez sometimes doesn't work and recursively calls the function again and again. To solve this problem, you should copy the function properly. You can do this by using functools.partial, so the final code is:

import numpy as np
from functools import partial
from keras.datasets import imdb

# save np.load
np_load_old = partial(np.load)

# modify the default parameters of np.load
np.load = lambda *a, **k: np_load_old(*a, allow_pickle=True, **k)

# call load_data with allow_pickle implicitly set to true
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

# restore np.load for future normal usage
np.load = np_load_old

Upvotes: 2

SidK

Reputation: 1214

What I have found is that TensorFlow 2.0 (I am using 2.0.0-alpha0) is not compatible with the latest versions of Numpy, i.e. v1.17.0 (and possibly v1.16.5+). As soon as TF2 is imported, it throws a huge list of FutureWarnings that looks something like this:

FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/anaconda3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/anaconda3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/anaconda3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.

This also resulted in the allow_pickle error when I tried to load the imdb dataset from keras.

I tried to use the following solution, which worked just fine, but I had to do it in every single project where I was importing TF2 or tf.keras:

np_load_old = np.load
np.load = lambda *a, **k: np_load_old(*a, allow_pickle=True, **k)

The easiest solution I found was either to install numpy 1.16.1 globally, or to use compatible versions of tensorflow and numpy in a virtual environment.

My goal with this answer is to point out that it's not just a problem with imdb.load_data, but a larger problem caused by the incompatibility of TF2 and Numpy versions, which may result in many other hidden bugs or issues.

Upvotes: 2

ReimuChan

Reputation: 21

It worked for me:

import numpy as np
from keras.datasets import reuters

np_load_old = np.load
np.load = lambda *a: np_load_old(*a, allow_pickle=True)
(x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=None, test_split=0.2)
np.load = np_load_old

Upvotes: 2

Yasser Elbarbay

Reputation: 133

On a Jupyter notebook, using

np_load_old = np.load

# modify the default parameters of np.load
np.load = lambda *a,**k: np_load_old(*a, allow_pickle=True, **k)

worked fine, but the problem appears when you use this method in Spyder (you have to restart the kernel every time, or you will get an error like):

TypeError: <lambda>() got multiple values for keyword argument 'allow_pickle'

I solved this issue using the solution linked here.
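As a separate minimal sketch (not necessarily the linked solution; the _np_load_old name is just illustrative), one way to avoid the error is to patch np.load only once, so re-running the cell doesn't wrap the wrapper again:

import numpy as np

# only patch np.load once; subsequent runs see the marker and skip
if not getattr(np.load, '_patched', False):
    _np_load_old = np.load
    np.load = lambda *a, **k: _np_load_old(*a, allow_pickle=True, **k)
    np.load._patched = True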

Upvotes: 4

user122933

Reputation:

Here's a trick to force imdb.load_data to allow pickle: in your notebook, replace this line:

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

with this:

import numpy as np
# save np.load
np_load_old = np.load

# modify the default parameters of np.load
np.load = lambda *a,**k: np_load_old(*a, allow_pickle=True, **k)

# call load_data with allow_pickle implicitly set to true
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

# restore np.load for future normal usage
np.load = np_load_old

Upvotes: 160

Leonhard Rathnak

Reputation: 31

Find the path to imdb.py, then just add the flag to np.load(path, ...flag...):

    def load_data(.......):
    .......................................
    .......................................
    - with np.load(path) as f:
    + with np.load(path,allow_pickle=True) as f:

Upvotes: 3

You can try changing the flag's value:

np.load(training_image_names_array, allow_pickle=True)

Upvotes: 12

Gustavo Mirapalheta

Reputation: 997

I think the answer from cheez (https://stackoverflow.com/users/122933/cheez) is the easiest and most effective one. I'd elaborate on it a little bit so that it does not modify the numpy function for the whole session.

My suggestion is below. I'm using it to download the reuters dataset from keras, which shows the same kind of error:

import numpy as np

old = np.load
np.load = lambda *a, **k: old(*a, **k, allow_pickle=True)

from keras.datasets import reuters
(train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000)

np.load = old
del old

Upvotes: 11

Joocheol Kim

Reputation: 134

TensorFlow has a fix in the tf-nightly version.

!pip install tf-nightly

The current version is '2.0.0-dev20190511'.

Upvotes: 0

MappaGnosis

Reputation: 1169

Following this issue on GitHub, the official solution is to edit the imdb.py file. This fix worked well for me without the need to downgrade numpy. Find the imdb.py file at tensorflow/python/keras/datasets/imdb.py (the full path for me was C:\Anaconda\Lib\site-packages\tensorflow\python\keras\datasets\imdb.py; other installs will differ) and change line 85 as per the diff:

-  with np.load(path) as f:
+  with np.load(path, allow_pickle=True) as f:

The reason for the change is security: to prevent the Python equivalent of an SQL injection via a pickled file. The change above will ONLY affect the imdb data, and you therefore retain the security elsewhere (by not downgrading numpy).
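As a minimal illustration of that trade-off (the function name and flag below are hypothetical, not part of numpy or keras), you can keep allow_pickle off by default and opt in only for files you trust:

import numpy as np

def load_array(path, trusted=False):
    # Object arrays in untrusted files raise ValueError instead of
    # unpickling (and potentially executing) arbitrary code.
    return np.load(path, allow_pickle=trusted)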

Upvotes: 64

Wissam

Reputation: 81

Yes, installing a previous version of numpy solved the problem.

For those who use the PyCharm IDE:

In my IDE (PyCharm), under File -> Settings -> Project Interpreter, I found my numpy version to be 1.16.3, so I reverted to 1.16.1: click +, type numpy in the search, tick "Specify version" with 1.16.1, and choose Install Package.

Upvotes: 1

Tirth Patel

Reputation: 1905

This issue is still open on the keras GitHub repository. I hope it gets solved as soon as possible. Until then, try downgrading your numpy version to 1.16.1. It seems to solve the problem.

!pip install numpy==1.16.1
import numpy as np

This version of numpy has the default value of allow_pickle as True.

Upvotes: 111
