Reputation: 25
I installed and configured my Hadoop cluster (version 2.6.0) and it works fine, but every time I shut the cluster down I can no longer access the data in HDFS.
Upvotes: 0
Views: 171
Reputation: 1133
dfs.name.dir: Determines where on the local filesystem the DFS NameNode stores the name table (fsimage). If this is a comma-delimited list of directories, the name table is replicated in all of the directories, for redundancy.
dfs.data.dir: Determines where on the local filesystem a DFS DataNode stores its blocks. If this is a comma-delimited list of directories, data is stored in all named directories, typically on different devices. Directories that do not exist are ignored.
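For example, you could set both in hdfs-site.xml. This is a minimal sketch: the /data/hadoop/* paths are placeholders for any directory that survives a reboot, and in Hadoop 2.x the current property names are dfs.namenode.name.dir / dfs.datanode.data.dir (the older names above still work as deprecated aliases):

```xml
<!-- hdfs-site.xml: sketch only; /data/hadoop/* paths are hypothetical,
     pick any location that is NOT wiped on reboot -->
<configuration>
  <property>
    <!-- where the NameNode keeps fsimage/edits
         (dfs.name.dir is the deprecated 1.x alias) -->
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/hadoop/namenode</value>
  </property>
  <property>
    <!-- where each DataNode stores its blocks
         (dfs.data.dir is the deprecated 1.x alias) -->
    <name>dfs.datanode.data.dir</name>
    <value>file:///data/hadoop/datanode</value>
  </property>
</configuration>
```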
If you have not set these two parameters, they default to locations under hadoop.tmp.dir, which can be configured in core-site.xml.
If hadoop.tmp.dir is not defined either, it defaults to /tmp/hadoop-${user.name} (so /tmp/hadoop-hadoop for a user named hadoop).
I am assuming that is what is happening in your case: /tmp is typically cleared on reboot, so after a shutdown HDFS can no longer locate its metadata or the actual block data.
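To avoid this, you could point hadoop.tmp.dir at a persistent location instead of relying on the /tmp default; a minimal sketch, where /data/hadoop/tmp is again a hypothetical path:

```xml
<!-- core-site.xml: sketch only; /data/hadoop/tmp is a placeholder
     for any directory that survives a reboot -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop/tmp</value>
  </property>
</configuration>
```

Note that if you point the NameNode directory at a new, empty location, you will need to format it once (hdfs namenode -format) before HDFS will start; this creates a fresh, empty filesystem, so existing data in the old location is not carried over.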
Upvotes: 1