user1219626

Connection refused when I run a Hive select query

I am having trouble running a Hive select query. I created a database (mydb) in Hive, and as soon as I run a query on mydb's tables, it gives me the error below.

Failed with exception java.io.IOException:java.net.ConnectException: Call From oodles-   Latitude-3540/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

The configuration of my Hadoop core-site.xml file is shown below:

<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.0.114:9000</value>
  <description>The name of the default file system. A URI whose
  scheme and authority determine the FileSystem implementation.
  </description>
</property>
</configuration>
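Every HDFS client (including Hive) takes the NameNode's RPC endpoint from the `fs.default.name` URI above. As a minimal sketch of how that URI decomposes into the scheme, host, and port the client will actually connect to:

```python
from urllib.parse import urlparse

# fs.default.name is a URI; its authority (host:port) is the address
# that HDFS clients, Hive included, will try to reach.
uri = urlparse("hdfs://192.168.0.114:9000")
print(uri.scheme, uri.hostname, uri.port)  # hdfs 192.168.0.114 9000
```

If the NameNode is not actually listening on that host:port, every client call fails with exactly the "Connection refused" error shown above.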

And the configuration of my mapred-site.xml.template file is:

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>192.168.0.114:8021</value>
        <description>The host and port that the MapReduce job tracker runs at.</description>
    </property>
</configuration>

If I change the host name in both files from 192.168.0.114 to localhost, the Hive query works fine, but it does not work with 192.168.0.114.

Why does Hive always point to localhost:9000? Can't we change it to point at my preferred location (192.168.0.114:9000)? How can I fix the Hive select query so it shows results with the above configuration of the Hadoop conf files? I hope my question is clear!

Upvotes: 1

Views: 6523

Answers (2)

user1219626

I found the problem that was causing this error:

Failed with exception java.io.IOException:java.net.ConnectException: Call From oodles-   Latitude-3540/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

By default, Hive creates tables according to the configuration of your namenode, i.e. at

hdfs://localhost:9000/user/hive/warehouse.

If you later change the namenode configuration, i.e. change the fs.default.name property to

hdfs://hostname:9000 

in core-site.xml (and also in hive-site.xml) and then try to access a table with a select query, you are still searching at the previous location, i.e. hdfs://localhost:9000/user/hive/warehouse, to which your namenode is no longer connected. Your namenode is now at hdfs://hostname:9000, and that is why it gives you

Call From oodles-Latitude-3540/127.0.1.1 to localhost:9000 failed on connection exception
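In other words, the host:port stored in the table's location no longer matches the host:port of the running NameNode. A minimal sketch of that comparison (the paths below are hypothetical examples, not taken from the original post):

```python
from urllib.parse import urlparse

# Hypothetical values: the location Hive recorded when the table was
# created, versus the NameNode address now configured in core-site.xml.
stored_location = "hdfs://localhost:9000/user/hive/warehouse/mydb.db/mytable"
current_fs = "hdfs://hostname:9000"

# Hive resolves the stored URI literally, so if the authorities differ,
# the query contacts a NameNode address that is no longer in service.
mismatch = urlparse(stored_location).netloc != urlparse(current_fs).netloc
print(mismatch)  # True -> "Connection refused" against localhost:9000
```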

In my case, I changed my hive-site.xml file like this:

<configuration>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/new/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
<property>
  <name>hadoop.embedded.local.mode</name>
  <value>false</value>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://hostname:9000</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>hostname:8021</value>
</property>
</configuration>

Now when Hive creates a table, it will pick up the value of fs.default.name from here. (hostname is my IP, which I mapped in /etc/hosts as shown below.)

127.0.0.1    localhost
127.0.1.1    oodles-Latitude-3540
192.168.0.113   hostname

My core-site.xml file

<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://hostname:9000</value>
  <description>The name of the default file system. A URI whose
  scheme and authority determine the FileSystem implementation.
  </description>
</property>
</configuration>

My mapred-site.xml file:

<configuration>
<property>
    <name>mapred.job.tracker</name>
    <value>hostname:8021</value>
</property>
</configuration>

Now if your table location matches your namenode, i.e.

fs.default.name = hdfs://hostname:9000

then it will give you no error.

You can check the location of your table by executing this query:

show create table <table name>

Any questions? Feel free to ask!

Upvotes: 4

K S Nidhin

Reputation: 2650

Since it works fine with localhost, I would suggest adding your IP address to the /etc/hosts file. Define all the cluster nodes' DNS names as well as their IPs.

Hope this helps .

--UPDATE--

A sample hosts mapping:

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.7.192.56 hostname

Upvotes: 0
