Jay Cee

Reputation: 1965

pyspark on EMR, should spark.executor.pyspark.memory and executor.memory be set?

I'm running a heavy repartitioning job (heavy because of the amount of data, not because of what Spark is doing) and I keep running into various memory errors.

I'm completely new to Spark, and after some research I finally found a way to get the job running very fast, but if the table I'm trying to repartition is too big, I run into another memory error.

The way I configure it within EMR at the moment is the following, for 9 r3.8xlarge instances:

--executor-cores 11 --executor-memory 180G

My question is: should I set --conf spark.executor.pyspark.memory as well? If yes, to which value? Should it be the same as --executor-memory?
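To make it concrete, I'm asking whether the submit should look something like this (the pyspark memory value here is just a placeholder, not something I've validated):

    spark-submit \
      --executor-cores 11 \
      --executor-memory 180G \
      --conf spark.executor.pyspark.memory=18G \
      emr_interim_aad_ds_conversions_4.py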

I can't confirm the following, but I had the feeling that when I set both to the same value it crashed with a Java heap error (so I assumed it was trying to provision too much RAM).

As asked in the comments, the latest error I got from EMR is:

diagnostics: Application application_1564657600123_0004 failed 2 times due to AM Container for appattempt_1564657600123_0004_000002 exited with  exitCode: -104
Failing this attempt.Diagnostics: Container [pid=80943,containerID=container_1564657600123_0004_02_000001] is running beyond physical memory limits. Current usage: 1.4 GB of 1.4 GB physical memory used; 5.1 GB of 6.9 GB virtual memory used. Killing container.
Dump of the process-tree for container_1564657600123_0004_02_000001 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 81090 81021 80943 80943 (python) 331 403 1709522944 15143 python emr_interim_aad_ds_conversions_4.py 
    |- 81021 80943 80943 80943 (java) 313384 5420 3625050112 345744 /usr/lib/jvm/java-openjdk/bin/java -server -Xmx1024m -Djava.io.tmpdir=/mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/tmp -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError=kill -9 %p -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class org.apache.spark.deploy.PythonRunner --primary-py-file emr_interim_aad_ds_conversions_4.py --properties-file /mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/__spark_conf__/__spark_conf__.properties 
    |- 80943 80941 80943 80943 (bash) 1 1 115879936 668 /bin/bash -c LD_LIBRARY_PATH="/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native" /usr/lib/jvm/java-openjdk/bin/java -server -Xmx1024m -Djava.io.tmpdir=/mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/tmp '-XX:+UseConcMarkSweepGC' '-XX:CMSInitiatingOccupancyFraction=70' '-XX:MaxHeapFreeRatio=70' '-XX:+CMSClassUnloadingEnabled' '-XX:OnOutOfMemoryError=kill -9 %p' -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.deploy.PythonRunner' --primary-py-file emr_interim_aad_ds_conversions_4.py --properties-file /mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001/stdout 2> /var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001/stderr 

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
For more detailed output, check the application tracking page: http://ip-172-31-35-191.eu-west-1.compute.internal:8088/cluster/app/application_1564657600123_0004 Then click on links to logs of each attempt.
. Failing the application.
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1564757608574
     final status: FAILED
     tracking URL: http://ip-172-31-35-191.eu-west-1.compute.internal:8088/cluster/app/application_1564657600123_0004
     user: hadoop
19/08/03 09:44:57 ERROR Client: Application diagnostics message: Application application_1564657600123_0004 failed 2 times due to AM Container for appattempt_1564657600123_0004_000002 exited with  exitCode: -104
Exception in thread "main" org.apache.spark.SparkException: Application application_1564657600123_0004 finished with failed status
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1148)
    at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1525)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/08/03 09:44:57 INFO ShutdownHookManager: Shutdown hook called
19/08/03 09:44:57 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-245f0132-a6e5-4a6d-874f-a71942b1636f
19/08/03 09:44:57 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-db12c471-c7d5-4c86-8cda-bf3246ffb860
Command exiting with ret '1'

Upvotes: 0

Views: 1653

Answers (1)

DennisLi

Reputation: 4156

Try to increase your memory as much as possible; below is a config example in a PySpark script. You can tune your memory like that.

    from pyspark import SparkConf

    conf = SparkConf()
    conf.set('spark.dynamicAllocation.enabled', 'false')
    conf.set('spark.yarn.am.memory', '4g')  # As the log shows, you need to increase your AM memory.
    conf.set('spark.yarn.am.cores', '2')
    conf.set('spark.executor.memoryOverhead', '1200')  # Off-heap memory (in MB) allocated per executor; used by the container itself.
    conf.set('spark.executor.memory', '2500m')  # memory * instances should be less than the node's total memory
    conf.set('spark.executor.cores', '4')  # --executor-cores
    conf.set('spark.executor.instances', '8')  # --num-executors
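For completeness, here is a minimal sketch of passing that conf object to the session when building it (the app name is just a placeholder):

    from pyspark.sql import SparkSession

    # Reuse the conf object built above; 'repartition_job' is only a placeholder app name.
    spark = (SparkSession.builder
             .appName('repartition_job')
             .config(conf=conf)
             .getOrCreate())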

BTW, repartition is a heavy operator; you can use coalesce, which avoids a full shuffle, if you only need to reduce the number of partitions. See the sketch below.
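For example, assuming a DataFrame df that is already loaded, the difference looks like this (df and the partition counts are placeholders):

    # repartition(200) triggers a full shuffle and can increase or decrease the partition count.
    df_repartitioned = df.repartition(200)

    # coalesce(50) only merges existing partitions, so it avoids a full shuffle,
    # but it can only decrease the partition count.
    df_coalesced = df.coalesce(50)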

Upvotes: 1
