populoso

Reputation: 13

How can I create a Dask array from zipped .npy files?

I have a large dataset stored as zipped npy files. How can I stack a given subset of these into a Dask array?

I'm aware of dask.array.from_npy_stack but I don't know how to use it for this.

Here's a crude first attempt that uses up all my memory:

import numpy as np
import dask.array as da

data = np.load('data.npz')  # lazy: opens the archive without reading any arrays

def load(files):
    # data[file] decompresses each entire array into memory right here,
    # so wrapping it in da.from_array afterwards doesn't save anything
    list_ = [da.from_array(data[file]) for file in files]
    return da.stack(list_)

x = load(['foo', 'bar'])

Upvotes: 1

Views: 1108

Answers (1)

MRocklin

Reputation: 57261

Well, you can't load a large npz file into memory, because then you're already out of memory. I would read each one in a delayed fashion, and then call da.from_array and da.stack, much as you do in your example.

Here are some docs that may help if you haven't seen them before: https://docs.dask.org/en/latest/array-creation.html#using-dask-delayed
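A minimal sketch of that delayed approach, assuming every member of the .npz has the same shape and dtype (which you need to know up front, since da.from_delayed can't inspect the data without loading it). The file and array names here are just illustrative:

```python
import numpy as np
import dask
import dask.array as da

# Create a small sample archive standing in for the real data.npz
np.savez('data.npz',
         foo=np.arange(6, dtype='int64').reshape(2, 3),
         bar=np.arange(6, 12, dtype='int64').reshape(2, 3))

@dask.delayed
def load_member(path, name):
    # np.load on an .npz opens the archive lazily; indexing by name
    # decompresses only that one member, and only when computed
    with np.load(path) as npz:
        return npz[name]

def load(path, names, shape, dtype):
    # Wrap each delayed read as a dask array, then stack along a new axis
    arrays = [da.from_delayed(load_member(path, n), shape=shape, dtype=dtype)
              for n in names]
    return da.stack(arrays)

x = load('data.npz', ['foo', 'bar'], shape=(2, 3), dtype='int64')
print(x.shape)      # (2, 2, 3)
print(x.compute())
```

Nothing is read from disk until you call compute(), so memory use is bounded by the chunks actually being processed rather than the whole archive.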

Upvotes: 1
