Tweedy

Reputation: 51

Hadoop how to allocate more memory per node

I have a Hadoop cluster running on 2 nodes (a master and a slave), each of which has 126 GB of RAM and 32 CPU cores. When I run my cluster, I only see 8 GB of memory per node. How do I increase this, and what would be the optimal amount of memory to allocate per node?

Upvotes: 2

Views: 4014

Answers (2)

fdeslaur

Reputation: 154

You might have to tell Hadoop which parameters to use when launching its JVMs; otherwise it falls back to your Java implementation's default values.

In your mapred-site.xml, you can add a mapred.child.java.opts property to specify the heap size for the task JVMs.

<property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx16000m</value>
</property>

Where 16000 is the number of MB you want to allocate to each JVM.
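Note that mapred.child.java.opts only raises the heap of the task JVMs. If the cluster is running Hadoop 2.x with YARN, the "8 GB per node" the question reports is the default of yarn.nodemanager.resource.memory-mb (8192 MB), which caps how much memory YARN will hand out on each node regardless of JVM options. A sketch of the relevant yarn-site.xml settings — the values here are illustrative, not tuned recommendations:

```
<!-- yarn-site.xml (illustrative values, assuming Hadoop 2.x / YARN) -->
<property>
    <!-- Total memory YARN may allocate to containers on this node.
         Defaults to 8192 MB, matching the 8 GB observed. -->
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>102400</value>
</property>
<property>
    <!-- Largest single container YARN will grant. -->
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>102400</value>
</property>
```

Leave headroom below the physical 126 GB for the OS and the Hadoop daemons themselves.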

I hope it helps!


Upvotes: 1

Lester Martin

Reputation: 331

This blog post will give you a ton of help: http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
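In short, the approach in that post is: reserve memory for the OS and Hadoop daemons, give the remainder to YARN, then size the map and reduce containers to divide it evenly. A rough, hedged worked example for a 126 GB / 32-core node (the reservation and container sizes are illustrative assumptions, not values from the question):

```
<!-- Reserve roughly 24 GB for OS + daemons, leaving ~100 GB for YARN. -->

<!-- yarn-site.xml -->
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>102400</value>
</property>
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>2048</value>
</property>

<!-- mapred-site.xml: per-container sizes for MapReduce tasks.
     With ~100 GB per node and 2-4 GB containers, each node can run
     dozens of tasks concurrently instead of a handful. -->
<property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
</property>
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4096</value>
</property>
```

The JVM heap (-Xmx) for each task should then be set somewhat below its container size so the process does not exceed its allocation and get killed.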

Upvotes: 2
