Alex K

Reputation: 139

Azure Databricks: Error, Specified heap memory (4096MB) is above the maximum executor memory (3157MB) allowed for node type Standard_F4

I keep getting org.apache.spark.SparkException: Job aborted when I try to save my flattened JSON file in Azure Blob Storage as CSV. Some answers I have found recommend increasing the executor memory, which I have done here:
[screenshot: cluster Spark config with spark.executor.memory set to 4g]

I get this error when I try to save the config:

[screenshot: "Specified heap memory (4096MB) is above the maximum executor memory (3157MB) allowed for node type Standard_F4"]

What do I need to do to solve this issue?

EDIT

Adding part of the stack trace that is causing org.apache.spark.SparkException: Job aborted. I have also tried with and without coalesce when saving my flattened dataframe (a rough sketch of the save call follows the stack trace below):

ERROR FileFormatWriter: Aborting job 0d8c01f9-9ff3-4297-b677-401355dca6c4.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 79.0 failed 4 times, most recent failure: Lost task 0.3 in stage 79.0 (TID 236) (10.139.64.7 executor 15): ExecutorLostFailure (executor 15 exited caused by one of the running tasks) Reason: Command exited with code 52
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:3312)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:3244)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:3235)
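For reference, the save is roughly of this form; the paths and options here are placeholders rather than my exact code:

    # Rough sketch of the failing save (placeholder paths, flattening already done upstream).
    flat_df = spark.read.json("/mnt/blob/input/raw.json")  # placeholder input path

    # coalesce(1) funnels all rows through a single task, which can blow past
    # executor memory; without it, Spark writes one CSV part file per partition.
    (flat_df
     .coalesce(1)
     .write
     .mode("overwrite")
     .option("header", "true")
     .csv("/mnt/blob/output/flattened_csv"))  # placeholder output path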

Upvotes: 0

Views: 1617

Answers (1)

Pratik Lad

Reputation: 8341

I ran into a similar error when setting spark.executor.memory 4g in the Spark config on a cluster with the same worker node type.


The cause of the error is that the maximum executor memory allowed on this node type is 3157 MB (about 3 GB), while you are asking for 4 GB, exactly as the error message says.

Resolution:

  • Set spark.executor.memory to less than the 3157 MB limit (for example 3g); a config sketch is shown below.
  • Select a bigger worker type such as Standard_F8 or Standard_F16.
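For example, in the cluster's Spark config (under the cluster's Advanced options), the setting would look something like this; the exact value is up to you as long as it stays under the node's limit:

    spark.executor.memory 3g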

Upvotes: 2
