saurabh shashank

Reputation: 1353

Spark job fails on YARN with memory error

My Spark job fails with the following error:

    Diagnostics: Container [pid=7277,containerID=container_1528934459854_1736_02_000001] is running beyond physical memory limits. Current usage: 1.4 GB of 1.4 GB physical memory used; 3.1 GB of 6.9 GB virtual memory used. Killing container.

Upvotes: 0

Views: 153

Answers (1)

Abhinav

Reputation: 666

Your containers are being killed. This happens when YARN does not have enough memory available to run the task, so the solution is to increase the YARN memory.

You have two options:

  1. Increase the memory allocated to your existing NodeManager, or
  2. Add a NodeManager on another DataNode.

Either option increases the total YARN memory; make sure it ends up at at least around 2 GB.
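As a rough sketch of the first option, NodeManager memory is controlled in yarn-site.xml. The property names below are standard YARN settings; the values are placeholders I'm assuming for illustration, so size them to the actual RAM on your node:

    <!-- yarn-site.xml: minimal sketch, assuming roughly 4 GB of the node's RAM
         can be given to YARN containers. Values are illustrative only. -->
    <property>
      <!-- Total RAM the NodeManager may hand out to containers on this node -->
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>4096</value>
    </property>
    <property>
      <!-- Largest single container YARN will allocate -->
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>2048</value>
    </property>

After changing these values, restart the NodeManager (and the ResourceManager if you changed the scheduler limit) for the new memory settings to take effect.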

Upvotes: 0
