Arvind Kumar

Reputation: 1335

Namenode and Jobtracker information on Hadoop cluster

How can I get the following information on a Hadoop cluster?

1. The namenode and jobtracker names
2. A list of all nodes on the cluster, with their roles

Upvotes: 3

Views: 5930

Answers (4)

Sagar Morakhia

Reputation: 797

To get namenode info:

    hdfs getconf -confKey fs.defaultFS  

For the JobTracker (the ResourceManager in YARN; the rm2 suffix applies only when ResourceManager HA is configured, otherwise use yarn.resourcemanager.address):

    hdfs getconf -confKey yarn.resourcemanager.address.rm2

Upvotes: 5

PradeepKumbhar

Reputation: 3421

Along with the command-line way of getting this information, you can also get similar information in the browser:

    http://<namenode>:50070 (for general Hadoop/HDFS information)
    http://<jobtracker>:50030 (for JobTracker-related information)

These are the default ports. You can check here for more information.
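For scripting, the same web UI port also serves machine-readable JSON at the /jmx endpoint, which modern Hadoop daemons expose. The bean and attribute names below are illustrative only, not taken from a live cluster; on a real cluster you would fetch the JSON with curl from http://<namenode>:50070/jmx instead of using a sample file:

```shell
# Hypothetical excerpt of a NameNode's /jmx output; bean and attribute
# names vary by Hadoop version, so check your own cluster's /jmx first.
cat > /tmp/jmx-sample.json <<'EOF'
{"beans":[{"name":"Hadoop:service=NameNode,name=NameNodeStatus","HostAndPort":"namenode.example.com:8020"}]}
EOF

# Extract the namenode host:port field from the JSON
grep -o '"HostAndPort":"[^"]*"' /tmp/jmx-sample.json
# "HostAndPort":"namenode.example.com:8020"
```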

Upvotes: 2

Alex Raj Kaliamoorthy

Reputation: 2095

I am using a Cloudera-based cluster and also working on EMR. In both clusters I can find this information in the configuration directory. To get the namenode information, go into the core-site.xml file and look for fs.defaultFS, as @daemon12 said.

Here is a straightforward way to get it. For the namenode information, use the command below:

    cat /etc/hadoop/conf/core-site.xml | grep '8020'

Here is the result:

    <value>hdfs://10.872.22.1:8020</value>

The value inside the value tag is the namenode information.

Similarly, to get the jobtracker information, do the below:

    cat /etc/hadoop/conf/yarn-site.xml | grep '8032'

Here is the result:

    <value>10.872.12.32:8032</value>

Again, the jobtracker value is inside the value tag.

Generally, the NN and JT information is needed to run Oozie jobs, and this method will help for that purpose.

DISCLAIMER: I am grepping the output based on the namenode and jobtracker port numbers, which are 8020 and 8032 respectively. These are the widely known default ports for the NN and JT in Hadoop. If your organization uses different ones, grep for those instead to get a more appropriate result.
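If matching on the port number is too fragile for your cluster, an alternative is to match on the property name instead. A minimal sketch, assuming the standard /etc/hadoop/conf/core-site.xml layout; the sample file and hostname below are hypothetical:

```shell
# Hypothetical sample core-site.xml; on a real cluster, point awk at
# /etc/hadoop/conf/core-site.xml instead of this temp file.
cat > /tmp/core-site-sample.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
EOF

# Match on the property name, so non-default ports still work:
# set a flag at the <name> line, then print the next <value> contents.
NN=$(awk '/<name>fs.defaultFS<\/name>/{found=1}
          found && /<value>/{gsub(/.*<value>|<\/value>.*/,""); print; exit}' \
          /tmp/core-site-sample.xml)
echo "$NN"   # hdfs://namenode.example.com:8020
```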

Upvotes: 2

tokiloutok

Reputation: 467

With the correct authorization (e.g. running as sudo -u hdfs), you may try:

    hdfs dfsadmin -report
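To turn that report into a plain node list (the second part of the question), you can filter its Hostname lines. The excerpt below is a hypothetical sample, and the report format varies by Hadoop version, so treat this as a sketch rather than a guaranteed parser:

```shell
# Hypothetical excerpt of `hdfs dfsadmin -report` output; on a real
# cluster, pipe the live command into awk instead of this sample file.
cat > /tmp/dfsadmin-report-sample.txt <<'EOF'
Live datanodes (2):

Name: 10.0.0.11:50010 (datanode1.example.com)
Hostname: datanode1.example.com
Decommission Status : Normal

Name: 10.0.0.12:50010 (datanode2.example.com)
Hostname: datanode2.example.com
Decommission Status : Normal
EOF

# Keep only the hostname field of each "Hostname: ..." line
NODES=$(awk -F': ' '/^Hostname:/{print $2}' /tmp/dfsadmin-report-sample.txt)
echo "$NODES"
```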

Upvotes: 1
