nanounanue

Reputation: 8342

hadoop fs commands are showing the local filesystem not the hdfs

I installed Hadoop on several laptops in order to form a Hadoop cluster. First we installed in pseudo-distributed mode, and on all except one everything was perfect (i.e. all the services run, and when I do tests with hadoop fs it shows the HDFS). On the aforementioned laptop (the one with problems) the `hadoop fs -ls` command shows the contents of the local directory, not the HDFS; the same happens with the commands -cat, -mkdir, and -put. What could I be doing wrong?

Any help would be appreciated

Here is my core-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
 <name>hadoop.tmp.dir</name>
 <value>/home/hduser/hdfs_dir/tmp</value>
 <description></description>
</property>

<property>
 <name>fs.default.name</name>
 <value>hdfs://localhost:54310</value>
 <description>.</description>
</property>
</configuration>

I should add that this is the same file as on all the other laptops, and they work fine.
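One way to narrow this down: when the client cannot find core-site.xml on its classpath, fs.default.name silently falls back to file:///, which makes `hadoop fs -ls` list the local filesystem. The sketch below (assumptions: the XML is inlined here for illustration; on a real machine point CONF at the core-site.xml in your HADOOP_CONF_DIR) extracts the configured value so you can confirm its scheme is hdfs://:

```shell
# Inline copy of the core-site.xml property in question; replace the
# mktemp/heredoc with CONF=/path/to/your/core-site.xml on a real system.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>
EOF

# Pull out the <value> that follows the fs.default.name <name> element.
value=$(grep -A1 '<name>fs.default.name</name>' "$CONF" \
        | grep -o '<value>[^<]*</value>' \
        | sed -e 's/<value>//' -e 's/<\/value>//')
echo "fs.default.name = $value"
case "$value" in
  hdfs://*) echo "OK: default filesystem is HDFS" ;;
  *)        echo "WARNING: default filesystem is not hdfs:// (local fallback likely)" ;;
esac
```

If the value printed on the broken laptop is empty or file:///, the client is not reading the core-site.xml you think it is.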

Upvotes: 8

Views: 7741

Answers (3)

Sandeep Kumar

Reputation: 288

If fs.default.name in core-site.xml points to hdfs://localhost:54310/ (with or without the trailing /) and you still have the same problem, then you might be looking at the wrong config file. In my case (Cloudera's CDH4) the config directory is resolved through symbolic links; check them:

ls -l /etc/hadoop/conf
/etc/hadoop/conf -> /etc/alternatives/hadoop-conf

ls -l /etc/alternatives/hadoop-conf
/etc/alternatives/hadoop-conf -> /etc/hadoop/conf.cloudera.yarn1

Earlier I used MRv1 and then migrated to MRv2 (YARN); the symlinks were broken after the upgrade:

ls -l /etc/hadoop/conf
/etc/hadoop/conf -> /etc/alternatives/hadoop-conf

ls -l /etc/alternatives/hadoop-conf
/etc/alternatives/hadoop-conf -> /etc/hadoop/conf.cloudera.mapreduce1

ls -l /etc/hadoop/conf.cloudera.mapreduce1
ls: cannot access /etc/hadoop/conf.cloudera.mapreduce1: No such file or directory

Also, update-alternatives had been run so that the /etc/hadoop/conf.cloudera.mapreduce1 path had the highest priority:

alternatives --display hadoop-conf
hadoop-conf - status is manual.
link currently points to /etc/hadoop/conf.cloudera.mapreduce1
/etc/hadoop/conf.cloudera.hdfs1 - priority 90
/etc/hadoop/conf.cloudera.mapreduce1 - priority 92
/etc/hadoop/conf.empty - priority 10
/etc/hadoop/conf.cloudera.yarn1 - priority 91
Current `best' version is /etc/hadoop/conf.cloudera.mapreduce1.

To remove the old link (which has the highest priority) and point to the YARN configuration, do:

update-alternatives --remove hadoop-conf /etc/hadoop/conf.cloudera.mapreduce1
rm -f /etc/alternatives/hadoop-conf
ln -s /etc/hadoop/conf.cloudera.yarn1 /etc/alternatives/hadoop-conf
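The dangling-symlink situation above can be detected mechanically. This is a sketch using a throwaway directory instead of /etc/hadoop (adjust the paths for a real system): `test -e` follows symlinks, so it fails when the final target of the chain is missing, and `readlink -f` shows what the chain resolves to.

```shell
# Recreate the broken chain in a temp dir:
#   conf -> hadoop-conf -> conf.cloudera.mapreduce1 (missing)
d=$(mktemp -d)
ln -s "$d/conf.cloudera.mapreduce1" "$d/hadoop-conf"   # target does not exist
ln -s "$d/hadoop-conf" "$d/conf"

if [ -e "$d/conf" ]; then
  echo "conf resolves to: $(readlink -f "$d/conf")"
else
  echo "BROKEN: $d/conf ultimately points to a missing directory"
fi
```

Running the same `[ -e ... ]` check against /etc/hadoop/conf on an upgraded CDH box would reveal the problem before any hadoop command is run.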

Upvotes: 2

rampion

Reputation: 89053

I had the same problem, and I had to make sure the value of fs.default.name included a trailing / to refer to the path component:

<property>
 <name>fs.default.name</name>
 <value>hdfs://localhost:54310/</value>
 <description>.</description>
</property>

Upvotes: 6

Ion Cojocaru

Reputation: 2583

Check that fs.default.name in core-site.xml points to the correct namenode, e.g.:

<property>
     <name>fs.default.name</name>
     <value>hdfs://target-namenode:54310</value>
</property>

Upvotes: 4
