Reputation: 3798
I have a problem with my Ambari server: it is not able to start the NameNode. I'm using HDP 2.0.6 and Ambari 1.4.1. It is worth mentioning that this only happens once I've enabled Kerberos security; when it is disabled, there is no error.
The error is:
2015-02-04 16:01:48,680 ERROR namenode.EditLogInputStream (EditLogFileInputStream.java:nextOpImpl(173)) - caught exception initializing http://int-iot-hadoop-fe-02.novalocal:8480/getJournal?jid=integration&segmentTxId=1&storageInfo=-47%3A1493795199%3A0%3ACID-a5152e6c-64ab-4978-9f1c-e4613a09454d
org.apache.hadoop.hdfs.server.namenode.TransferFsImage$HttpGetFailedException: Fetch of http://int-iot-hadoop-fe-02.novalocal:8480/getJournal?jid=integration&segmentTxId=1&storageInfo=-47%3A1493795199%3A0%3ACID-a5152e6c-64ab-4978-9f1c-e4613a09454d failed with status code 500
Response message:
getedit failed. java.lang.IllegalArgumentException: Does not contain a valid host:port authority: null
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:211)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.getHttpAddress(SecondaryNameNode.java:210)
at org.apache.hadoop.hdfs.qjournal.server.GetJournalEditServlet.isValidRequestor(GetJournalEditServlet.java:93)
at org.apache.hadoop.hdfs.qjournal.server.GetJournalEditServlet.checkRequestorOrSendError(GetJournalEditServlet.java:128)
at org.apache.hadoop.hdfs.qjournal.server.GetJournalEditServlet.doGet(GetJournalEditServlet.java:174)
...
It seems the problem is about retrieving the Secondary NameNode HTTP address: as the stack trace shows, the JournalNode's GetJournalEditServlet validates the requestor by calling SecondaryNameNode.getHttpAddress, and that address is in fact set to null in hdfs-site.xml (I do not know why):
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>null</value>
</property>
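For reference, a healthy entry should look something like this (the hostname below is a placeholder for your actual Secondary NameNode host; 50090 is the default Secondary NameNode HTTP port):
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>my.secondary.namenode.domain:50090</value>
</property>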
I've tried to set that parameter's value to the appropriate one, but nothing works:
- Editing the hdfs-site.xml files by hand and running hdfs namenode, but nothing occurs.
- Editing the hdfs-site.xml files by hand and starting the whole HDFS from Ambari, but nothing occurs. The dfs.namenode.secondary.http-address parameter is even set to null again!
- Adding the property from the Ambari UI through the custom hdfs-site.xml list > add new property... the problem is that dfs.namenode.secondary.http-address is not listed, and the UI does not allow me to add it because it says it already exists! :)
I've also noted that a site-XXXX.pp file is created under /var/lib/ambari-agent/data/ each time the HDFS service is restarted from the Ambari UI, and I've found that each of these files has:
[root@int-iot-hadoop-fe-02 ~]# cat /var/lib/ambari-agent/data/site-3228.pp | grep dfs.namenode.secondary.http-address
"dfs.namenode.secondary.http-address" => 'null',
I think another candidate file for configuring this property could be /var/lib/ambari-agent/puppet/modules/hdp-hadoop/manifests/params.pp. There is a ### hdfs-site section in it, but I'm not able to figure out the name of the Puppet variable associated with the dfs.namenode.secondary.http-address property.
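A quick way to hunt for candidate variable names (just a text search, since the exact variable name is unknown) is:
[root@int-iot-hadoop-fe-02 ~]# grep -n -i "secondary" /var/lib/ambari-agent/puppet/modules/hdp-hadoop/manifests/params.pp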
Any ideas? Thanks!
Upvotes: 3
Views: 6140
Reputation: 3798
Partially fixed: it is necessary to stop all the HDFS services (JournalNodes, NameNodes and DataNodes) before editing the hdfs-site.xml file. Then, of course, Ambari's "start button" cannot be used, because the configuration would be smashed again... thus it is necessary to restart all the services manually. This is not a definitive solution, since it is desirable that these configuration changes could be made from the Ambari UI...
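The manual sequence looks roughly like this (a sketch assuming the stock HDP 2.x layout and the hdfs service user; run each line on the node that hosts the corresponding daemon):
su -l hdfs -c "/usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop namenode"
su -l hdfs -c "/usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop datanode"
su -l hdfs -c "/usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop journalnode"
# fix dfs.namenode.secondary.http-address in /etc/hadoop/conf/hdfs-site.xml on every node, then:
su -l hdfs -c "/usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start journalnode"
su -l hdfs -c "/usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode"
su -l hdfs -c "/usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode"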
Upvotes: 0
Reputation: 36
I have a workaround to make it work under an Ambari environment:
On the Ambari node, modify app.js (shipped compressed as /usr/lib/ambari-server/web/javascripts/app.js.gz; see the steps after the snippets), changing this stanza from:
{
  "name": "dfs.namenode.secondary.http-address",
  "templateName": ["snamenode_host"],
  "foreignKey": null,
  "value": "<templateName[0]>:50090",
  "filename": "hdfs-site.xml"
},
to the specific value for your Secondary NameNode, not the templated one:
{
  "name": "dfs.namenode.secondary.http-address",
  "templateName": ["snamenode_host"],
  "foreignKey": null,
  "value": "my.secondary.namenode.domain:50090",
  "filename": "hdfs-site.xml"
},
Then:
1. Rename /usr/lib/ambari-server/web/javascripts/app.js.gz to /usr/lib/ambari-server/web/javascripts/app.js.gz.old.
2. Gzip the edited app.js so a new app.js.gz is generated in the same directory.
3. Refresh your Ambari web UI and force an HDFS restart; this will regenerate the appropriate /etc/hadoop/conf/hdfs-site.xml. If it does not, you could add a new property in the Ambari web UI and then delete it, in order to force the changes when you press the save button.
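The file handling boils down to something like this (a sketch; gunzip -c leaves the original archive in place until it is renamed):
cd /usr/lib/ambari-server/web/javascripts
gunzip -c app.js.gz > app.js   # extract a working copy to edit
# edit the dfs.namenode.secondary.http-address stanza in app.js as shown above
mv app.js.gz app.js.gz.old     # keep the original as a backup
gzip app.js                    # produces a fresh app.js.gz containing the fix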
Hope this helps.
--mLG
Upvotes: 2