Prem Anand

Reputation: 461

File system in Hadoop

Recently I have started to learn about Hadoop multi-node clusters.

How does the file system work in Hadoop?

For example, if I have one test.txt file full of farmer details, do I need to upload the file to the master HDFS server or to a slave server?

Upvotes: 0

Views: 255

Answers (2)

Roger.Liu

Reputation: 21

First, you need to upload the file test.txt to one node of the cluster (it can be the master server or a slave server), e.g. upload it to the /tmp folder so you have /tmp/test.txt. Then use the command:

# hadoop fs -put /tmp/test.txt /tmp

Then use the command:

# hadoop fs -ls /tmp

you will find that the file test.txt is already in the HDFS folder /tmp.
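
It does not matter which node you run the upload from: HDFS splits the file into blocks and replicates them across the DataNodes automatically. As a quick sanity check you can also print the file back from HDFS (assuming the paths above):

# hadoop fs -cat /tmp/test.txt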

Upvotes: 2

Farooque

Reputation: 3796

To learn how the Hadoop filesystem works, please refer to the book Hadoop: The Definitive Guide.

For the time being, to load a file into HDFS, you only need to run the -put or -copyFromLocal command from an edge node of the cluster (meaning any node from which you can run the hadoop command), and the rest will be taken care of by the Hadoop framework. Your command may look like the following.

Using the hadoop fs command set:

$ hadoop fs -copyFromLocal /home/user1/farmer_details.txt /user/user1

or

$ hadoop fs -put /home/user1/farmer_details.txt /user/user1

You can also try the hdfs dfs command set:

$ hdfs dfs -copyFromLocal /home/user1/farmer_details.txt /user/user1

or

$ hdfs dfs -put /home/user1/farmer_details.txt /user/user1

where /home/user1/farmer_details.txt is the source location in your local file system and /user/user1 is the destination location in the Hadoop filesystem.
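
If the destination directory /user/user1 does not exist yet, you can create it first (the -p flag creates parent directories as needed, just like the local mkdir):

$ hadoop fs -mkdir -p /user/user1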

To verify the uploaded file, you can run the command:

$ hadoop fs -ls /user/user1

You should see /user/user1/farmer_details.txt in the listing.
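
If you are curious where the file actually ended up, which goes back to the master/slave question, you can ask HDFS to report the blocks and the DataNodes holding them:

$ hdfs fsck /user/user1/farmer_details.txt -files -blocks -locations

The report shows that the file is stored as replicated blocks spread across the slave (DataNode) machines, regardless of which node you uploaded it from.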

Upvotes: 2
