Reputation: 427
I am trying to run an experiment on AzureML through the notebook. I get the above error when trying to read a dataset created in a previous step.
I checked the memory usage with the command df -h
and it looks OK. I found GitHub issues with the same error, but they don't appear to have been resolved.
Github issues link
What is going wrong here?
The line of code below gives the error. It ran successfully just a day ago on the same workspace, using the same compute.
Below is a screenshot of the memory usage:
Upvotes: 2
Views: 1502
Reputation: 812
Our team is working on a fix; sorry about the inconvenience.
Until then you have two possible workarounds: you can switch to a Jupyter notebook instead of using the integrated notebook, or you can add the snippet below to your notebook:
import os

current_dir = os.getcwd()
if not current_dir.endswith('/code'):
    os.chdir(current_dir + '/code')
Upvotes: 1
Reputation: 1
I was experiencing the same issue this morning, with the exact same error message as in Charl's answer, just when calling Dataset.get_by_name(ws, 'dataset-name')
from a notebook.
The error then seemed to resolve itself, with no changes on my end, but now many other cells in the notebook that place very little demand on memory (e.g. instantiating OutputDatasetConfig(...)
, or doing anything at all with the dataset) are throwing the same error.
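Since the failure above came and went on its own, one pragmatic stopgap while the service is flaky is to retry the call a few times with a pause. Below is a minimal sketch of such a retry helper; the helper itself is generic Python, and the commented-out usage against Dataset.get_by_name is only an illustration of how one might apply it, not part of the AzureML SDK.

```python
import time


def retry(fn, attempts=3, delay=2.0):
    """Call fn(), retrying on MemoryError/RuntimeError with a pause between tries.

    Raises the last exception if every attempt fails.
    """
    for i in range(attempts):
        try:
            return fn()
        except (MemoryError, RuntimeError):
            if i == attempts - 1:
                raise  # out of attempts; re-raise the last error
            time.sleep(delay)


# Hypothetical usage against the AzureML SDK (assumes ws and the
# dataset name from the post above; not executed here):
# dataset = retry(lambda: Dataset.get_by_name(ws, 'dataset-name'))
```

This only papers over a transient service-side problem; if the error is deterministic, the helper will simply fail after the last attempt.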
Upvotes: 0
Reputation: 41
I have the same issue. I have been using an AML notebook with 7 GB RAM to train a time-series model on 5 MB of data; training would complete in 2 seconds. I have been doing this every other day for months, and now I am getting MemoryError: Engine process terminated. This is most likely due to system running out of memory. Please retry with increased memory.
Strangely, the error appears after about one second, which is not what you would expect from a MemoryError, and it persists even after upgrading to 28 GB RAM. Something must have changed on the AML side? But there is nothing in the release notes.
Upvotes: 0