Tom Hemmes

Reputation: 2060

Batch reading and writing, from text file to HDF5 in Python

The goal is to feed large datasets to TensorFlow. I came up with the following implementation. However, while HDF5 I/O is supposed to be very fast, my implementation is slow. Is this due to not using the chunks option? I do not seem to get the dimensions right for the chunks; should I see the chunk size as a third dimension, like (4096, 7, 1000) for a chunk size of 1000?
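To make the question concrete, this is roughly what I mean by the chunks option (the chunk shape is exactly the part I am unsure about; the file name is just an example):

import h5py

with h5py.File('example.h5', 'w') as f:
    # is a 2-D chunk shape like this correct, or should the chunk size
    # of 1000 become a third dimension, i.e. (4096, 7, 1000)?
    d = f.create_dataset('data', (4096, 7), maxshape=(None, 7),
                         chunks=(1000, 7), dtype='float32')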

Please note, I could have simplified my code below further by finding a solution for a single generator. However, I think the data/label combination is very common and useful for others.

I use the following function to create two generators, one for the data and one for the corresponding labels.

import io
import numpy as np

def read_chunks(file, dim, batch_size=batch_size):
    # start with an empty (0, dim) array so no uninitialized row ends up in the chunk
    chunk = np.empty((0, dim))
    current_size = 0
    # read input file line by line
    for line in file:
        # parse the line and stack it onto the chunk
        chunk = np.vstack((chunk, np.genfromtxt(io.BytesIO(line.encode()))))
        current_size += 1
        # chunk reaches batch size
        if current_size == batch_size:
            yield chunk
            # reset counter and chunk
            current_size = 0
            chunk = np.empty((0, dim))
    # note: leftover lines that do not fill a full batch are dropped

Then I wish to move the data and labels produced by these generators to HDF5.

import os
import h5py

def write_h5(data_gen, label_gen, out_file, batch_size, h5_batch_size, data_dtype, label_dtype):
    # remove existing file
    if os.path.isfile(out_file):
        os.remove(out_file)
    with h5py.File(out_file, 'a') as f:
        # create a dataset and labelset in the same file
        # (start with zero rows, otherwise the initial rows are never filled)
        # data_dim and label_dim are module-level constants, see below
        d = f.create_dataset('data', (0, data_dim), maxshape=(None, data_dim), dtype=data_dtype)
        l = f.create_dataset('label', (0, label_dim), maxshape=(None, label_dim), dtype=label_dtype)
        # use generators to fill both sets
        for data in data_gen:
            d.resize(d.shape[0] + batch_size, axis=0)
            d[-batch_size:] = data
            l.resize(l.shape[0] + batch_size, axis=0)
            l[-batch_size:] = next(label_gen)

With the following constants, I combined both functions like so:

batch_size = 4096
h5_batch_size = 1000
data_dim = 7 #[NUM_POINT, 9]
label_dim = 1 #[NUM_POINT]
data_dtype = 'float32'
label_dtype = 'uint8'

for data_file, label_file in data_label_files:
    print(data_file)
    with open(data_file, 'r') as data_f, open(label_file, 'r') as label_f:
        data_gen = read_chunks(data_f, dim=data_dim)
        label_gen = read_chunks(label_f, dim=label_dim)
        out_file = data_file[:-4] + '.h5'
        write_h5(data_gen, label_gen, out_file, batch_size, h5_batch_size, data_dtype, label_dtype)

Upvotes: 0

Views: 1053

Answers (1)

John Zwinck

Reputation: 249464

The problem is not that HDF5 is slow. The problem is that you are reading a single line at a time in a Python loop, calling genfromtxt() once per line! That function is meant to read entire files. And then you use the anti-pattern of `array = vstack((array, newstuff))` in the same loop.

In short, your performance problem starts here:

    chunk = np.vstack((chunk, np.genfromtxt(io.BytesIO(line.encode()))))

You should just read the entire file at once. If you can't do that, read it in large blocks (you can set a maximum number of lines to read each time, such as 1 million).
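A rough sketch of that idea, assuming 7 data columns and 1 label column per line as in your code (txt_to_h5 and block_lines are made-up names, and the block size is just an example):

import itertools
import numpy as np
import h5py

def txt_to_h5(data_file, label_file, out_file, block_lines=1_000_000):
    with open(data_file) as df, open(label_file) as lf, h5py.File(out_file, 'w') as f:
        d = f.create_dataset('data', (0, 7), maxshape=(None, 7), dtype='float32')
        l = f.create_dataset('label', (0, 1), maxshape=(None, 1), dtype='uint8')
        while True:
            # grab up to block_lines lines and parse them with ONE genfromtxt call
            data_lines = list(itertools.islice(df, block_lines))
            label_lines = list(itertools.islice(lf, block_lines))
            if not data_lines:
                break
            data = np.atleast_2d(np.genfromtxt(data_lines))
            labels = np.atleast_2d(np.genfromtxt(label_lines)).reshape(-1, 1)
            # append the parsed block to the HDF5 datasets
            n = data.shape[0]
            d.resize(d.shape[0] + n, axis=0)
            d[-n:] = data
            l.resize(l.shape[0] + n, axis=0)
            l[-n:] = labels

This keeps the per-call overhead of genfromtxt and the array growth out of the inner loop, which is where the time goes in your version.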

Upvotes: 3
