Alexey Bakulin

Reputation: 1369

AWS Glue executor memory limit

I found that AWS Glue sets up executor instances with a memory limit of 5 GB (--conf spark.executor.memory=5g), and sometimes, on big datasets, it fails with java.lang.OutOfMemoryError. The same applies to the driver instance (spark.driver.memory=5g). Is there any option to increase this value?
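For context, a minimal PySpark sketch (an illustration, not part of the original question) that prints the memory settings a running Glue job actually ends up with, which is a quick way to check whether an override took effect:

    # Log the effective Spark memory settings from inside a Glue PySpark job.
    from pyspark.context import SparkContext

    sc = SparkContext.getOrCreate()
    conf = sc.getConf()

    # These default to 5g on classic Glue DPUs unless overridden.
    print("spark.executor.memory =", conf.get("spark.executor.memory", "not set"))
    print("spark.driver.memory   =", conf.get("spark.driver.memory", "not set"))
    print("spark.yarn.executor.memoryOverhead =",
          conf.get("spark.yarn.executor.memoryOverhead", "not set"))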

Upvotes: 26

Views: 52320

Answers (6)

Tanmoy Dasgupta

Reputation: 21

You can use the Glue G.1X and G.2X worker types, which provide more memory and disk space, to scale Glue jobs that need high memory and throughput. You can also edit the Glue job and set the --conf value to spark.yarn.executor.memoryOverhead=1024 or 2048, and spark.driver.memory=10g.
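A hedged boto3 sketch of switching an existing job to G.2X workers (the job name, worker count, and Glue version fallback are placeholders, not values from this answer):

    # Switch an existing Glue job to G.2X workers via the Glue API (boto3).
    # "my-glue-job" and the worker count are placeholders.
    import boto3

    glue = boto3.client("glue")
    job = glue.get_job(JobName="my-glue-job")["Job"]

    glue.update_job(
        JobName="my-glue-job",
        JobUpdate={
            "Role": job["Role"],
            "Command": job["Command"],
            "GlueVersion": job.get("GlueVersion", "2.0"),
            # G.1X workers have roughly 16 GB of memory, G.2X roughly 32 GB.
            "WorkerType": "G.2X",
            "NumberOfWorkers": 10,
        },
    )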

Upvotes: 0

xtreampb

Reputation: 562

Despite the AWS documentation stating that the --conf parameter should not be passed, our AWS support team told us to pass --conf spark.driver.memory=10g, which corrected the issue we were having.
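As an illustration, the same argument could also be passed for a single run through boto3's start_job_run, since run-time Arguments override the job's default arguments (the job name is a placeholder):

    # Pass the driver-memory override for one run only, via boto3.
    # "my-glue-job" is a placeholder job name.
    import boto3

    glue = boto3.client("glue")
    response = glue.start_job_run(
        JobName="my-glue-job",
        Arguments={"--conf": "spark.driver.memory=10g"},
    )
    print("Started run:", response["JobRunId"])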

Upvotes: 18

Zambonilli

Reputation: 4591

I hit out-of-memory errors like this when I had a highly skewed dataset. In my case, I had a bucket of JSON files that contained dynamic payloads that differed based on the event type indicated in the JSON. I kept hitting out-of-memory errors regardless of whether I used the configuration flags indicated here and increased the DPUs. It turned out that my events were highly skewed, with a couple of event types making up more than 90% of the total dataset. Once I added a "salt" to the event types and broke up the highly skewed data, I did not hit any out-of-memory errors.

Here's a blog post for AWS EMR that discusses the same out-of-memory error with highly skewed data: https://medium.com/thron-tech/optimising-spark-rdd-pipelines-679b41362a8a
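A minimal sketch of the salting idea described above (column names, the bucket path, and the salt count are illustrative, not from the original job):

    # Salt a skewed key so one dominant event type no longer lands on a single
    # executor. Column names and the path are illustrative.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    events = spark.read.json("s3://my-bucket/events/")  # placeholder path

    N_SALTS = 10  # spread each event type across up to 10 sub-keys

    salted = events.withColumn(
        "salted_event_type",
        F.concat_ws("_",
                    F.col("event_type"),
                    (F.rand() * N_SALTS).cast("int").cast("string")),
    )

    # Aggregations now distribute across event_type *and* the salt.
    counts = salted.groupBy("salted_event_type").count()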

Upvotes: 2

ashutosh singh

Reputation: 185

  1. Open Glue > Jobs > Edit your Job > Script libraries and job parameters (optional) > Job parameters near the bottom
  2. Set the following: Key: --conf, Value: spark.yarn.executor.memoryOverhead=1024 and spark.driver.memory=10g
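Since the console accepts only one --conf key, a commonly reported workaround is to chain additional settings by embedding extra --conf tokens inside the value. A hedged boto3 sketch of that pattern (the job name and the exact chaining format are assumptions worth verifying on your Glue version):

    # Set both overrides as the job's default arguments via boto3.
    # The chained "--conf" inside the value is a commonly reported workaround,
    # not an officially documented format; verify on your Glue version.
    import boto3

    glue = boto3.client("glue")
    job = glue.get_job(JobName="my-glue-job")["Job"]  # placeholder name

    args = dict(job.get("DefaultArguments", {}))
    args["--conf"] = "spark.yarn.executor.memoryOverhead=1024 --conf spark.driver.memory=10g"

    glue.update_job(
        JobName="my-glue-job",
        JobUpdate={
            "Role": job["Role"],
            "Command": job["Command"],
            "DefaultArguments": args,
        },
    )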

Upvotes: 10

cozyss

Reputation: 1388

The official Glue documentation suggests that Glue doesn't support a custom Spark config:

There are also several argument names used by AWS Glue internally that you should never set:

--conf — Internal to AWS Glue. Do not set!

--debug — Internal to AWS Glue. Do not set!

--mode — Internal to AWS Glue. Do not set!

--JOB_NAME — Internal to AWS Glue. Do not set!

Any better suggestions for solving this problem?

Upvotes: 5

Kris Bravo

Reputation: 161

You can override the parameters by editing the job and adding job parameters. The key and value I used are here:

Key: --conf

Value: spark.yarn.executor.memoryOverhead=7g

This seemed counterintuitive since the setting key is actually in the value, but it was recognized. So if you're attempting to set spark.executor.memory, the following parameter would be appropriate:

Key: --conf

Value: spark.executor.memory=7g

Upvotes: 13
