Miguel Dickson

Reputation: 198

Google Cloud Dataflow job breaking mysteriously

I am repeatedly trying to run a set of Google Cloud Dataflow jobs that, until relatively recently, worked routinely but now tend to crash. The error below has been the most baffling of all, simply because I have no idea what code it is referencing; it seems to be internal to GCP.

My job ID here is: 2019-02-26_13_27_30-16974532604317793751

I'm running these jobs on n1-standard-96 instances.

For reference, the full trace:

  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 642, in do_work
    work_executor.execute()
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 156, in execute
    op.start()
  File "dataflow_worker/shuffle_operations.py", line 49, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
    def start(self):
  File "dataflow_worker/shuffle_operations.py", line 50, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
    with self.scoped_start_state:
  File "dataflow_worker/shuffle_operations.py", line 65, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
    with self.scoped_process_state:
  File "dataflow_worker/shuffle_operations.py", line 66, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
    with self.shuffle_source.reader() as reader:
  File "dataflow_worker/shuffle_operations.py", line 68, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
    for key_values in reader:
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/shuffle.py", line 433, in __iter__
    for entry in entries_iterator:
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/shuffle.py", line 272, in next
    return next(self.iterator)
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/shuffle.py", line 230, in __iter__
    chunk, next_position = self.reader.Read(start_position, end_position)
  File "third_party/windmill/shuffle/python/shuffle_client.pyx", line 133, in shuffle_client.PyShuffleReader.Read
IOError: Shuffle read failed: DATA_LOSS: Missing last fragment of a large value.

Upvotes: 1

Views: 414

Answers (1)

Barry

Reputation: 495

Perhaps the input data is larger now, and Dataflow's default shuffle can't handle it?

I had a job that was hitting shuffle issues. It started working when I switched to the optional Dataflow Shuffle service, so you may want to try it. Simply add the following flag to your job command:

--experiments shuffle_mode=service

Reference: see the "Using Cloud Dataflow Shuffle" section of the Cloud Dataflow documentation.

Upvotes: 1
