Reputation: 83167
I generate an npz file as follows:
import numpy as np

# Generate the npz file: 30000 variable-length arrays of random integers
dataset_text_filepath = 'test_np_load.npz'
texts = []
for text_number in range(30000):
    texts.append(np.random.random_integers(0, 20000,
                                           size=np.random.random_integers(0, 100)))
texts = np.array(texts)
np.savez(dataset_text_filepath, texts=texts)
This gives me a ~7 MiB npz file containing a single variable, texts, which is a NumPy array of NumPy arrays. I load it with numpy.load():
# Load data
# (newer NumPy versions also need allow_pickle=True here,
#  since texts is an object array)
dataset = np.load(dataset_text_filepath)
If I query it as follows, it takes several minutes:
# Querying data: the slow way
for i in range(20):
    print('Run {0}'.format(i))
    random_indices = np.random.randint(0, len(dataset['texts']), size=10)
    dataset['texts'][random_indices]
while if I query it as follows, it takes less than 5 seconds:
# Querying data: the fast way
data_texts = dataset['texts']
for i in range(20):
    print('Run {0}'.format(i))
    random_indices = np.random.randint(0, len(data_texts), size=10)
    data_texts[random_indices]
How come the second method is so much faster than the first one?
Upvotes: 2
Views: 2208
Reputation: 231375
dataset['texts'] re-reads the file each time it is used. np.load of an npz file just returns a file loader, not the actual data: it is a 'lazy loader' that reads a particular array from disk only when that array is accessed. The np.load docs could be clearer, but they say:
- If the file is a ``.npz`` file, the returned value supports the context
  manager protocol in a similar fashion to the open function::

      with load('foo.npz') as data:
          a = data['a']

  The underlying file descriptor is closed when exiting the 'with' block.
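A minimal, self-contained sketch of that pattern (the filename demo.npz and array name a here are illustrative, not from the question):

```python
import numpy as np

# Write a small npz file just for this demonstration.
np.savez('demo.npz', a=np.arange(5))

# Accessing data['a'] reads the array into memory, so the result
# stays valid after the 'with' block closes the underlying file.
with np.load('demo.npz') as data:
    a = data['a']

print(a)  # the array is still usable once the file is closed
```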
and from the savez docs:
When opening the saved ``.npz`` file with `load` a `NpzFile` object is
returned. This is a dictionary-like object which can be queried for
its list of arrays (with the ``.files`` attribute), and for the arrays
themselves.
More details are in help(np.lib.npyio.NpzFile).
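The lazy behaviour is easy to observe: every subscript on the NpzFile object decompresses the array afresh and returns a brand-new ndarray, so you get equal data but distinct objects each time. A short sketch (the file and array names are made up for the demo):

```python
import numpy as np

np.savez('lazy_demo.npz', a=np.arange(1000))
dataset = np.load('lazy_demo.npz')

first = dataset['a']   # reads and decompresses the array from the zip
second = dataset['a']  # reads it again: a separate ndarray

print(first is second)                # False: two independent reads
print(np.array_equal(first, second))  # True: same underlying data

# Binding the array once, as in the question's fast version,
# pays the read/decompression cost a single time.
data_a = dataset['a']
dataset.close()
```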
Upvotes: 5