Reputation: 33
I have a problem when running spark-shell. I'm getting the following error message every time:
Required executor memory (1024+384 MB) is above the max threshold (1024 MB) of this cluster!
I tried the following steps to fix the problem, but without any results.
The strange thing is that spark-shell works once after a restart of the service. After that, spark-shell no longer picks up the correct executor memory; it starts with 1 GB every time.
I hope someone can help me fix the problem.
Kind Regards
hbenner89
Upvotes: 2
Views: 6146
Reputation: 46
I faced the same problem and reduced the executor memory to 512 MB, which worked. This assumes that 512 MB is sufficient for your program.
spark-submit --proxy-user spark --master yarn --deploy-mode client --name pi --conf "spark.app.id=pi" --driver-memory 512M --executor-memory 512M pi.py
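Since the original question uses spark-shell rather than spark-submit, the equivalent reduction for spark-shell (assuming 512 MB really is enough for your workload) would look roughly like this:
spark-shell --master yarn --deploy-mode client --driver-memory 512M --executor-memory 512M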
Upvotes: 2
Reputation: 33
Thanks for your answer. It helped a lot with understanding the memory options.
I found the problem: it was not the executor memory. I changed yarn.nodemanager.resource.memory-mb to 2 GB, and after this change everything works fine.
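For reference, a minimal sketch of what that change might look like in yarn-site.xml (the 2048 values simply mirror the 2 GB mentioned above; the "max threshold" in the error typically comes from the related yarn.scheduler.maximum-allocation-mb, so that property is also worth checking and should be at least as large as the requested executor memory plus overhead):
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>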
Upvotes: 1
Reputation: 542
Have you tried allocating more than 1g of memory, since it is complaining that it needs more?
I would try running with 2g as a test.
bin/spark-shell --executor-memory 2g --master yarn
Be sure to leave a small cushion for the OS so it doesn't take up the entire system's memory.
This option also applies to the standalone mode you've been using, but if you have been using the EC2 scripts, we set "spark.executor.memory" in conf/spark-defaults.conf automatically so you don't have to specify it on the command line each time. You can do the same for YARN.
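As a sketch, a single line like the following in conf/spark-defaults.conf (the 2g value here is only an example) would make every spark-shell or spark-submit launch use that executor size without repeating the flag:
spark.executor.memory   2g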
Upvotes: 1