Reputation: 81
I am trying to configure Apache HiveServer2.
My configuration file, hive-site.xml:
<configuration>
<property>
<name>hive.server2.thrift.min.worker.threads</name>
<value>5</value>
<description>Minimum number of worker threads</description>
</property>
<property>
<name>hive.server2.thrift.max.worker.threads</name>
<value>500</value>
<description>Maximum number of worker threads</description>
</property>
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
<description>TCP port number to listen on</description>
</property>
<property>
<name>hive.server2.thrift.bind.host</name>
<value>10.89.20.22</value>
<description>TCP interface to bind to</description>
</property>
<property>
<name>hive.server2.transport.mode</name>
<value>binary</value>
<description>Set to http to enable HTTP transport mode</description>
</property>
<property>
<name>hive.server2.thrift.http.port</name>
<value>10001</value>
<description>HTTP port number to listen on</description>
</property>
<property>
<name>hive.server2.thrift.http.max.worker.threads</name>
<value>500</value>
<description>Maximum worker threads in the server pool</description>
</property>
<property>
<name>hive.server2.thrift.http.min.worker.threads</name>
<value>5</value>
<description>Minimum worker threads in the server pool</description>
</property>
<property>
<name>hive.server2.thrift.http.path</name>
<value>cliservice</value>
<description>The service endpoint</description>
</property>
</configuration>
The error that I am receiving.
I don't know what this error is about. Can someone help me configure this? Thank you so much.
Upvotes: 1
Views: 1385
Reputation: 181
There are no actual errors in that screenshot; that output is purely informational.
You will also notice these messages in the log are INFO lines, which are just informational. If you had any actual errors you would see levels like ERROR or FATAL, and WARN is also good to watch out for.
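If it helps, here is a quick way to pull out only the lines worth investigating. This is just an illustrative sketch; the log lines below are made up, not taken from your actual output:

```python
import re

# Matches the severity levels that indicate a real problem;
# INFO lines are deliberately excluded.
PROBLEM_LEVELS = re.compile(r"\b(WARN|ERROR|FATAL)\b")

def find_problems(log_lines):
    """Return only the lines whose log level suggests a real issue."""
    return [line for line in log_lines if PROBLEM_LEVELS.search(line)]

# Illustrative log lines:
sample = [
    "2016-05-12 10:01:02 INFO  server.HiveServer2: Starting HiveServer2",
    "2016-05-12 10:01:03 WARN  conf.HiveConf: DEPRECATED: some.property",
    "2016-05-12 10:01:04 ERROR metastore.HiveMetaStore: connection failed",
]
for line in find_problems(sample):
    print(line)
```

Running the same idea over your real log file (or just grepping for those levels) will quickly tell you whether anything actually failed.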
Those properties being flagged as deprecated look to be properties from your Hadoop site XML configuration files, such as hive-site.xml, that are no longer used. Hadoop will simply ignore them. If you remove those properties from their respective configuration XML files on whichever nodes the cluster reads them from, the messages should stop. You posted at least a portion of your hive-site.xml; it doesn't look complete, but these properties may not be in that file anyway. A Hadoop cluster has a number of configuration files, generally at least one for each service running on the cluster, so they might be in another file such as core-site.xml, mapred-site.xml, or other XML files on the nodes running the service.
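If you want to clean those out in bulk, a small script along these lines could strip named properties from a site XML file. This is only a sketch; the deprecated property name below is a placeholder, so substitute whatever names your startup log actually flags:

```python
import xml.etree.ElementTree as ET

def strip_properties(xml_text, deprecated):
    """Remove <property> blocks whose <name> is in `deprecated`
    from a Hadoop/Hive site XML document."""
    root = ET.fromstring(xml_text)
    for prop in list(root.findall("property")):
        if prop.findtext("name") in deprecated:
            root.remove(prop)
    return ET.tostring(root, encoding="unicode")

# Placeholder example; "some.deprecated.property" stands in for
# whatever property names your log reports as deprecated.
site_xml = """<configuration>
  <property><name>some.deprecated.property</name><value>x</value></property>
  <property><name>hive.server2.thrift.port</name><value>10000</value></property>
</configuration>"""

print(strip_properties(site_xml, {"some.deprecated.property"}))
```

You would still need to push the cleaned file to every node the cluster reads it from.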
The INFO messages about the SLF4J bindings being duplicated in the classpath are likely due to a duplicate jar file somewhere. A few services such as YARN/MapReduce have a classpath property in their XML, for example mapreduce.application.classpath, which lists folders on each node's operating system containing the jar files the cluster uses to run. These messages appear when a node has two jar files that contain an identical class.
The most common cause is installing an update to your Hadoop cluster, or upgrading a specific service to a new version. Most, if not all, of these Hadoop services execute jar files to run their jobs. When you upgrade the cluster or a service you get new jar files, which usually increments the version number in the file name, for example from test1.1.jar to test1.2.jar. If both the new and the old jar files are left on the cluster in the classpath, you will get a classpath conflict/warning. Essentially, you now have two jar files with different names but identical classes inside them, which causes these messages about the classpath having duplicates in it.
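As a sketch of how you might spot that situation, this groups jar file names by their base name and reports any that appear in more than one version. The file names and the version-number pattern are assumptions for illustration:

```python
import re
from collections import defaultdict

def find_duplicate_jars(jar_names):
    """Group jar file names by artifact name (everything before the
    version number) and return artifacts present in more than one version."""
    by_artifact = defaultdict(list)
    for jar in jar_names:
        # Assumes the common "<artifact>-<version>.jar" naming convention.
        match = re.match(r"(.+?)-(\d[\w.]*)\.jar$", jar)
        if match:
            by_artifact[match.group(1)].append(jar)
    return {name: jars for name, jars in by_artifact.items() if len(jars) > 1}

# Made-up file names mirroring the test1.1.jar / test1.2.jar scenario above:
print(find_duplicate_jars(["test-1.1.jar", "test-1.2.jar", "other-2.0.jar"]))
# → {'test': ['test-1.1.jar', 'test-1.2.jar']}
```

Pointing this at a directory listing of each folder in the classpath property would show which older versions are safe to delete.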
That classpath message also shows you both duplicate jar files, named right in those INFO lines (the slf4j* jars). Most likely, if you remove the older version from any node it is still on, it will stop telling you about it at startup.
It sounds like you had an existing Hadoop cluster and applied an update that upgraded a service to a new version. Whatever version(s) you updated to no longer use some of the properties the original version used, which is why it is telling you they are deprecated, or no longer used. You can simply remove them from whichever XML configuration file contains them, on every node that has them. It also sounds like your SLF4J jar was replaced with the latest version during the upgrade, but the original version wasn't removed everywhere.
Basically, Hadoop will keep running without error, but those INFO lines are really telling you to clean up after the upgrade.
Upvotes: 2