leon

Reputation: 10395

Hadoop namenode metadata

I am a bit confused by the Hadoop architecture.

  1. What kind of file metadata is stored in the Hadoop Namenode? The Hadoop wiki says the Namenode stores the entire filesystem namespace. Is information like last modified time, creation time, file size, owner, and permissions stored in the Namenode?

  2. Does datanode store any metadata information?

  3. Since there is only one Namenode, can the metadata exceed that server's capacity?

  4. If a user wants to download a file from Hadoop, does he have to download it from the Namenode? I found the architecture picture below on the web; it shows a client writing data directly to a datanode. Is that true?

Thanks!

Upvotes: 3

Views: 13461

Answers (6)

Nandakishore

Reputation: 1011

  1. Yes, the NameNode manages all of these. This data is also periodically persisted to the fsimage and edits files on the NameNode's local disk.

  2. No, all of the metadata is maintained by the NameNode, which keeps the datanodes free of that burden.

  3. There is only one primary NameNode. As mentioned above, to keep the metadata manageable, it is periodically persisted to fsimage and edits through checkpointing.

  4. The client contacts the DataNodes directly once it gets the file's block information from the NameNode.
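To make the split in points 1 and 2 concrete, here is a toy Python sketch (all class and field names are illustrative, not Hadoop's real API): the NameNode keeps per-file attributes and the block map, while DataNodes hold only raw block bytes.

```python
# Toy model of the HDFS metadata split -- illustrative only,
# not the real Hadoop implementation or API.

class NameNode:
    """Keeps the namespace: per-file attributes and block locations."""
    def __init__(self):
        self.metadata = {}   # path -> file attributes
        self.block_map = {}  # block_id -> list of datanode ids

    def create(self, path, owner, permissions):
        self.metadata[path] = {
            "owner": owner,
            "permissions": permissions,
            "size": 0,
            "blocks": [],     # ordered block ids for this file
        }

class DataNode:
    """Stores only raw block bytes; no file-level metadata."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.blocks = {}  # block_id -> bytes

nn = NameNode()
nn.create("/user/leon/report.txt", owner="leon", permissions="rw-r--r--")
print(nn.metadata["/user/leon/report.txt"]["owner"])  # leon
```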

Upvotes: 0

Jing Wang

Reputation: 50

For question number 4: the client does write data directly to the Datanodes. However, before it can write to a DataNode, it needs to talk to the Namenode to obtain metadata such as which Datanodes and which block to write to.
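A minimal sketch of that write path, with made-up function names (not Hadoop's API): the NameNode only allocates the block and picks target datanodes; the bytes themselves go straight from the client to those datanodes.

```python
# Toy sketch of the HDFS write path -- illustrative names, not Hadoop's API.

def allocate_block(namenode_block_map, datanodes, block_id, replication=3):
    """NameNode side: pick target datanodes for a new block."""
    targets = datanodes[:replication]  # real HDFS uses rack-aware placement
    namenode_block_map[block_id] = [d["id"] for d in targets]
    return targets

def write_block(targets, block_id, data):
    """Client side: ship the bytes straight to each chosen datanode."""
    for dn in targets:
        dn["storage"][block_id] = data  # data never passes through the NameNode

block_map = {}
datanodes = [{"id": i, "storage": {}} for i in range(4)]
targets = allocate_block(block_map, datanodes, block_id="blk_1")
write_block(targets, "blk_1", b"hello hdfs")
print(block_map["blk_1"])  # [0, 1, 2]
```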

Upvotes: 0

hari_sree

Reputation: 1538

3) When the number of files is very large, a single Namenode cannot hold all the metadata. In fact, that is one of the limitations of HDFS. Check out HDFS Federation, which aims to address this problem by splitting the filesystem into different namespaces served by different namenodes.

4)

Read process:
a) The client first asks the namenode which datanodes hold the actual data.
b) It then contacts those datanodes directly to read the data.

Write process:
a) The client asks the namenode for some datanodes to write to, and if they are available, the namenode provides them.
b) The client then goes directly to those datanodes and writes.

Upvotes: 1

Praveen Sripati

Reputation: 33495

  1. The fsimage on the name node is in a binary format. Use the "Offline Image Viewer" to dump the fsimage into a human-readable format. The output of this tool can then be analyzed with Pig or some other tool to extract more meaningful data.

http://hadoop.apache.org/hdfs/docs/r0.21.0/hdfs_imageviewer.html
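Assuming a Hadoop installation is on your PATH, a typical invocation looks like this (the fsimage path is only an example; substitute a file from your own checkpoint directory):

```shell
# Dump a copy of the fsimage to XML with the Offline Image Viewer.
# The input path below is illustrative -- point -i at your own fsimage file.
hdfs oiv -p XML -i /tmp/fsimage_backup -o /tmp/fsimage.xml
```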

Upvotes: 2

David Gruzman

Reputation: 8088

I think the following explanation can help you better understand the HDFS architecture. You can think of the Name node as being like a FAT (file allocation table) plus directory data, and the Data nodes as dumb block devices. When you want to read a file from a regular file system, you go to the directory, then to the FAT, get the locations of all the relevant blocks, and read them. The same happens with HDFS: when you want to read a file, you go to the Namenode and get the list of blocks that make up the file. This block information includes the list of datanodes where each block is stored. You then go to those datanodes and fetch the relevant blocks from them.
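The read path described above can be sketched as follows (a toy model with illustrative names, not Hadoop's real API): step one queries the NameNode's "FAT", step two fetches blocks directly from the datanodes.

```python
# Toy sketch of the HDFS read path, following the FAT analogy above.
# All names and data structures are illustrative, not Hadoop's API.

def get_block_locations(namenode, path):
    """Step 1: ask the NameNode which blocks form the file and where they live."""
    entry = namenode["namespace"][path]
    return [(blk, namenode["block_map"][blk]) for blk in entry["blocks"]]

def read_file(namenode, datanodes, path):
    """Step 2: fetch each block directly from one of its datanodes."""
    data = b""
    for block_id, locations in get_block_locations(namenode, path):
        dn = datanodes[locations[0]]  # any replica will do
        data += dn[block_id]
    return data

namenode = {
    "namespace": {"/data/log.txt": {"blocks": ["blk_1", "blk_2"]}},
    "block_map": {"blk_1": [0, 2], "blk_2": [1, 2]},  # block -> datanode ids
}
datanodes = {
    0: {"blk_1": b"hello "},
    1: {"blk_2": b"world"},
    2: {"blk_1": b"hello ", "blk_2": b"world"},  # second replica of each block
}
print(read_file(namenode, datanodes, "/data/log.txt"))  # b'hello world'
```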

Upvotes: 7

johndodo

Reputation: 18271

  1. yes
  2. no, apart from the blocks themselves
  3. yes, if you have many small files
  4. no; the info about the file is on the Namenode, while the file itself is on the Datanodes (a datanode can in theory be on the same machine, and often is on smaller clusters)
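Point 3 (many small files) can be made concrete with a back-of-the-envelope calculation. A commonly quoted rule of thumb is roughly 150 bytes of NameNode heap per namespace object (each file and each block); treat the figure as an estimate, not an exact number.

```python
# Back-of-the-envelope NameNode memory estimate for the small-files problem.
# ~150 bytes per namespace object (file or block) is a commonly quoted
# rule of thumb, not an exact figure.

BYTES_PER_OBJECT = 150

def namenode_heap_bytes(num_files, blocks_per_file=1):
    # one namespace object per file, plus one per block
    objects = num_files * (1 + blocks_per_file)
    return objects * BYTES_PER_OBJECT

# 100 million single-block (small) files:
gb = namenode_heap_bytes(100_000_000) / 1e9
print(f"{gb:.0f} GB")  # 30 GB -- entirely in NameNode RAM
```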

Upvotes: 1
