samst

Reputation: 556

Zeppelin Interpreter Memory - driver memory

I'm unsuccessfully trying to increase the driver memory for my Spark interpreter. I just set spark.driver.memory in the interpreter settings and everything looks great at first. But in the Docker container that Zeppelin runs in, the interpreter process is started with

Zeppelin 0.6.2, Spark 2.0.1

2:06 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -cp /usr/zeppelin/int.....-2.7.2/share/hadoop/tools/lib/* -Xmx1g ..... --class org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer /usr/zeppelin/interpreter/spark/zeppelin-spark_2.11-0.6.2.jar 42651

a max heap setting (-Xmx1g) that kind of breaks everything. My main issue is that I am trying to run MLlib's Latent Dirichlet Allocation, and it always runs out of memory and simply dies on the driver. The Docker container now has 26 GB of RAM, so that should be enough. Zeppelin itself should be fine with its 1 GB; it is the Spark driver that needs more.
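The question doesn't include the notebook code, but the failing call is essentially a plain MLlib LDA run. A minimal hypothetical sketch, assuming the RDD-based org.apache.spark.mllib.clustering.LDA API and Zeppelin's predefined sc, with tiny placeholder data:

import org.apache.spark.mllib.clustering.LDA
import org.apache.spark.mllib.linalg.Vectors

// Corpus of (documentId, termCountVector) pairs; the real corpus is far larger
val corpus = sc.parallelize(Seq(
  (0L, Vectors.dense(1.0, 0.0, 3.0)),
  (1L, Vectors.dense(2.0, 1.0, 0.0))
))

// On a real corpus, driver-side steps (e.g. materializing topicsMatrix)
// can easily exceed a ~400 MB heap and throw java.lang.OutOfMemoryError.
val ldaModel = new LDA().setK(10).setMaxIterations(50).run(corpus)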

My executor processes have plenty of RAM, but the driver is reported in the Spark UI as

Executor ID:          driver
Address:              172.17.0.6:40439
Status:               Active
RDD Blocks:           0
Storage Memory:       0.0 B / 404.7 MB
Disk Used:            0.0 B
Cores:                20
Active Tasks:         0
Failed Tasks:         0
Complete Tasks:       1
Total Tasks:          1
Task Time (GC Time):  1.4 s (0 ms)
Input:                0.0 B
Shuffle Read:         0.0 B
Shuffle Write:        0.0 B

which is pretty abysmal.

Setting ZEPPELIN_INTP_MEM='-Xms512m -Xmx12g' does not seem to change anything. I thought zeppelin-env.sh might not be loaded correctly, so I passed the variable directly in the docker create -e ZE... call, but that did not change anything either.
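For reference, that attempt looked roughly like this (the image name and the remaining flags are placeholders):

docker create -e ZEPPELIN_INTP_MEM='-Xms512m -Xmx12g' ... my-zeppelin-image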

SPARK_HOME is set and Zeppelin connects to a standalone Spark cluster. That part works; only the driver runs out of memory.

I also tried starting a local[*] process with 8 GB of driver memory and 6 GB of executor memory, but I got the same abysmal ~450 MB of driver memory.
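A sketch of the interpreter properties that test presumably used (property names as in the standard Spark interpreter settings; values from above):

master                  local[*]
spark.driver.memory     8g
spark.executor.memory   6g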

The interpreter reports a Java heap out-of-memory error, which halts the LDAModel training.

Upvotes: 2

Views: 5041

Answers (2)

zjffdu

Reputation: 28744

https://issues.apache.org/jira/browse/ZEPPELIN-1263 fixes this issue. After that you can use any standard Spark configuration; e.g. you can specify the driver memory by setting spark.driver.memory in the Spark interpreter settings.
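For example, in the Spark interpreter settings you would add a property like this (the 12g value is illustrative, matching the question):

spark.driver.memory     12g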

Upvotes: -1

Ben Peters

Reputation: 1

Just came across this in a search while running into the exact same problem! Hopefully you've found a solution by now, but in case anyone else runs across this issue while looking for a solution like I was, here's what's going on:

The process you're looking at here isn't considered an interpreter process by Zeppelin; it's actually a Spark driver process. That means its options are set differently than via the ZEPPELIN_INTP_MEM variable. Add this to your zeppelin-env.sh:

export SPARK_SUBMIT_OPTIONS="--driver-memory 12G"
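Other spark-submit flags can go into the same variable if you need them; for example (a sketch, the executor size is illustrative):

export SPARK_SUBMIT_OPTIONS="--driver-memory 12G --executor-memory 6G"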

Restart Zeppelin and you should be all set! (Tested and working with 0.7.3; it presumably works with earlier versions as well.)

Upvotes: 0
