Reputation: 10395
I am a bit confused by the Hadoop architecture.
What kind of file metadata is stored in the Hadoop Namenode? The Hadoop wiki says the Namenode stores the entire filesystem namespace. Is information like last modified time, creation time, file size, owner, and permissions stored in the Namenode?
Does the Datanode store any metadata?
There is only one Namenode; can the metadata exceed the server's capacity?
If a user wants to download a file from Hadoop, does he have to download it from the Namenode? I found the architecture picture below on the web; it shows that a client can write data directly to a Datanode. Is that true?
Upvotes: 3
Views: 13461
Reputation: 1011
Yes, the NameNode manages this metadata. It is also regularly persisted to the fsimage and edits files, which live on the NameNode's local disk.
No, all the metadata is maintained by the NameNode. This keeps the DataNodes free from the burden of maintaining metadata.
There is only one primary NameNode. As mentioned above, to keep the metadata size manageable, it is regularly persisted to the fsimage and edits files through checkpointing.
The client contacts the DataNodes directly once it gets the file's block information from the NameNode.
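To make the above concrete, here is a minimal sketch (not the real HDFS code; all class and field names are invented) of the two kinds of state a NameNode keeps in memory: per-file attributes such as owner, permissions, size, and modification time, plus the block-to-DataNode mapping that clients ask for before contacting DataNodes.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical, simplified model of NameNode metadata -- loosely modeled
# on the fields HDFS exposes via its FileStatus, not the actual classes.
@dataclass
class FileMeta:
    owner: str
    permissions: str          # e.g. "rw-r--r--"
    size: int                 # bytes
    modification_time: int    # epoch millis
    block_ids: List[str] = field(default_factory=list)

@dataclass
class NameNode:
    namespace: Dict[str, FileMeta] = field(default_factory=dict)
    # block id -> DataNodes holding a replica of that block
    block_locations: Dict[str, List[str]] = field(default_factory=dict)

    def get_file_status(self, path: str) -> FileMeta:
        return self.namespace[path]

    def get_block_locations(self, path: str) -> Dict[str, List[str]]:
        meta = self.namespace[path]
        return {b: self.block_locations[b] for b in meta.block_ids}

nn = NameNode()
nn.namespace["/logs/app.log"] = FileMeta(
    "alice", "rw-r--r--", 256 * 1024 * 1024, 1700000000000, ["blk_1", "blk_2"]
)
nn.block_locations["blk_1"] = ["dn1", "dn3"]
nn.block_locations["blk_2"] = ["dn2", "dn3"]

print(nn.get_file_status("/logs/app.log").owner)   # alice
print(nn.get_block_locations("/logs/app.log"))
```

Note that only metadata lives here; the file contents themselves are on the DataNodes.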
Upvotes: 0
Reputation: 50
For question number 4: the client does write data directly to the DataNodes. However, before it can write to a DataNode, it needs to talk to the NameNode to obtain metadata such as which DataNode and which block to write to.
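This two-step handshake can be sketched in a few lines. The sketch below is a toy simulation under made-up names (`allocate_block`, `write_block` are not real HDFS calls): the NameNode only hands out a block id and a target DataNode, and the payload then goes straight from the client to that DataNode.

```python
# Toy simulation of the HDFS write path: metadata request to the
# NameNode, then direct data transfer to a DataNode. All names invented.
class NameNode:
    def __init__(self, datanodes):
        self.datanodes = datanodes
        self.next_block = 0

    def allocate_block(self, path):
        """Pick a new block id and a target DataNode for this file."""
        self.next_block += 1
        block_id = f"blk_{self.next_block}"
        target = self.datanodes[self.next_block % len(self.datanodes)]
        return block_id, target

class DataNode:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write_block(self, block_id, data):
        # The actual bytes land here, never on the NameNode.
        self.blocks[block_id] = data

dns = {n: DataNode(n) for n in ("dn1", "dn2")}
nn = NameNode(list(dns))

# Client side: step 1 gets metadata, step 2 writes the data directly.
block_id, target = nn.allocate_block("/data/file1")
dns[target].write_block(block_id, b"hello hdfs")
print(target, block_id)
```

In real HDFS the DataNode also pipelines the block to replica DataNodes, which this sketch omits.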
Upvotes: 0
Reputation: 1538
3) When the number of files is very large, a single Namenode cannot keep all the metadata. In fact, that is one of the limitations of HDFS. You can check HDFS Federation, which aims to address this problem by splitting the metadata into different namespaces served by different Namenodes.
4)
Read process:
a) The client first asks the Namenode for the Datanodes where the actual data is located.
b) It then contacts those Datanodes directly to read the data.
Write process:
a) The client asks the Namenode for some Datanodes to write the data to, and the Namenode returns them if available.
b) The client goes directly to those Datanodes and writes.
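The read steps above can be sketched as a small simulation (all paths, block ids, and node names below are invented): step (a) is a metadata-only lookup against the Namenode, step (b) fetches each block directly from one of the Datanodes holding it.

```python
# Minimal simulation of the two-step HDFS read: the file data never
# passes through the Namenode. All identifiers are made up.
namenode = {  # path -> ordered list of (block_id, [datanodes holding it])
    "/data/file1": [("blk_1", ["dn1", "dn2"]), ("blk_2", ["dn2", "dn3"])],
}
datanodes = {
    "dn1": {"blk_1": b"hello "},
    "dn2": {"blk_1": b"hello ", "blk_2": b"world"},
    "dn3": {"blk_2": b"world"},
}

def read_file(path):
    content = b""
    for block_id, locations in namenode[path]:        # step (a): metadata only
        content += datanodes[locations[0]][block_id]  # step (b): direct read
    return content

print(read_file("/data/file1").decode())  # hello world
```

A real client would also pick the nearest replica and fall back to another Datanode on failure, which this sketch skips.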
Upvotes: 1
Reputation: 33495
http://hadoop.apache.org/hdfs/docs/r0.21.0/hdfs_imageviewer.html
Upvotes: 2
Reputation: 8088
I think the following explanation can help you better understand the HDFS architecture. You can think of the Namenode as being like a FAT (file allocation table) plus directory data, and of the Datanodes as dumb block devices.

When you want to read a file from a regular file system, you go to the directory, then to the FAT, get the locations of all the relevant blocks, and read them. The same happens with HDFS: when you want to read a file, you go to the Namenode and get the list of blocks the file consists of. For each block, this information includes the list of Datanodes holding it. After that, you go to those Datanodes and get the relevant blocks from them.
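The FAT half of this analogy can be illustrated with a toy example (all data below is invented): the directory gives a file's first cluster, the FAT chains clusters together, and the "dumb" block device just serves bytes by cluster number. In HDFS the Namenode plays the role of both tables, and the Datanodes play the role of the block device.

```python
# Toy FAT-style lookup: two metadata tables point into dumb block storage.
directory = {"notes.txt": 5}          # file name -> first cluster
fat = {5: 9, 9: None}                 # cluster -> next cluster (None = end)
disk = {5: b"hel", 9: b"lo"}          # block device: cluster -> raw bytes

def read_fat_file(name):
    data, cluster = b"", directory[name]
    while cluster is not None:        # walk the FAT chain
        data += disk[cluster]
        cluster = fat[cluster]
    return data

print(read_fat_file("notes.txt").decode())  # hello
```

The key point the analogy makes is the separation: the tables (Namenode) know *where* data is, while the block store (Datanodes) holds the data itself.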
Upvotes: 7