user_1177868

Reputation: 444

h5py, sporadic writing errors

I have some floating-point numbers to store in a big (500K x 500K) matrix. I am storing them in chunks, using arrays of variable size (depending on some specific conditions).

I have parallelised code (Python 3.3 and h5py) which produces the arrays and puts them in a shared queue, plus one dedicated process that pops from the queue and writes them one by one into the HDF5 matrix. It works as expected approximately 90% of the time.

Occasionally, I get write errors for specific arrays. If I run it multiple times, the faulty arrays change every time.

Here's the code:

import numpy
import h5py
from multiprocessing import Process, JoinableQueue

def writer(in_q):
    # Open HDF5 archive
    hdf5_file = h5py.File("./google_matrix_test.hdf5")
    hdf5_scores = hdf5_file['scores']
    while True:
        # Get some data
        try:
            data = in_q.get(timeout=5)
        except:
            hdf5_file.flush()
            print('HDF5 archive updated.')
            break
        # Process the data
        try:
            hdf5_scores[data[0], data[1]:data[2]+1] = numpy.matrix(data[3:])
        except:
            # Print faulty chunk's info
            print('E: ' + str(data[0:3]))
            in_q.put(data)  # <- doesn't solve
        in_q.task_done()

def compute():
    jobs_queue = JoinableQueue()
    scores_queue = JoinableQueue()

    processes = []
    processes.append(Process(target=producer, args=(jobs_queue, data,)))
    processes.append(Process(target=writer, args=(scores_queue,)))
    for i in range(10):
        processes.append(Process(target=consumer, args=(jobs_queue,scores_queue,)))

    for p in processes:
        p.start()

    processes[1].join()
    scores_queue.join()

Here's the error:

Process Process-2:
Traceback (most recent call last):
    File "/local/software/python3.3/lib/python3.3/multiprocessing/process.py", line 258, in _bootstrap
        self.run()
    File "/local/software/python3.3/lib/python3.3/multiprocessing/process.py", line 95, in run
        self._target(*self._args, **self._kwargs)
    File "./compute_scores_multiprocess.py", line 104, in writer
        hdf5_scores[data[0], data[1]:data[2]+1] = numpy.matrix(data[3:])
    File "/local/software/python3.3/lib/python3.3/site-packages/h5py/_hl/dataset.py", line 551, in __setitem__
        self.id.write(mspace, fspace, val, mtype)
    File "h5d.pyx", line 217, in h5py.h5d.DatasetID.write (h5py/h5d.c:2925)
    File "_proxy.pyx", line 120, in h5py._proxy.dset_rw (h5py/_proxy.c:1491)
    File "_proxy.pyx", line 93, in h5py._proxy.H5PY_H5Dwrite (h5py/_proxy.c:1301)
OSError: can't write data (Dataset: Write failed)

If I insert a pause of two seconds (time.sleep(2)) between write tasks, the problem seems to go away (although I cannot afford to waste 2 seconds per write, since I need to write more than 250,000 times). If I catch the write exception and put the faulty array back in the queue, the script never stops (presumably).

I am using CentOS (2.6.32-279.11.1.el6.x86_64). Any insight?

Thanks a lot.

Upvotes: 0

Views: 1090

Answers (1)

Andrew Collette

Reputation: 741

When using the multiprocessing module with HDF5, the only big restriction is that you can't have any files open (even read-only) when fork() is called. In other words, if you open a file in the master process to write, and then Python spins off a subprocess for computation, there may be problems. It has to do with how fork() works and the choices HDF5 itself makes about how to handle file descriptors.

My advice is to double-check your application to make sure you're creating any Pools, etc. before opening the master file for writing.
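As a minimal sketch of that ordering (the file name "scores.hdf5", the small dataset shape, and the sentinel-based shutdown below are placeholders, not the asker's actual setup), the writer can open the file only inside its own process, after fork(), while the parent starts every subprocess before touching HDF5:

import h5py
from multiprocessing import Process, JoinableQueue

def writer(in_q):
    # Open the HDF5 file *inside* the writer process, i.e. after fork()
    with h5py.File("scores.hdf5", "a") as f:
        dset = f.require_dataset("scores", shape=(10, 10), dtype="f8")
        while True:
            item = in_q.get()
            if item is None:        # sentinel: no more work
                in_q.task_done()
                break
            row, start, stop, values = item
            dset[row, start:stop + 1] = values
            in_q.task_done()

if __name__ == "__main__":
    q = JoinableQueue()
    # Start every subprocess (writer, consumers, Pools, ...) first ...
    w = Process(target=writer, args=(q,))
    w.start()
    # ... and only then open HDF5 files in the parent, if you need to at all.
    q.put((0, 0, 2, [0.1, 0.2, 0.3]))
    q.put(None)
    q.join()
    w.join()
    # Read back to check the write (safe: no further fork() happens here)
    with h5py.File("scores.hdf5", "r") as f:
        print(f["scores"][0, 0:3])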

Upvotes: 1
