Abhishek Sakhuja

Reputation: 222

Process to restart the NameNode in IBM BigInsights (with GPFS enabled as a transparency layer for HDFS)

I am working with IBM's Hadoop distribution (BigInsights), installed via Apache Ambari, which currently has GPFS (General Parallel File System) enabled as a transparency layer over HDFS. HDFS is in maintenance mode in Ambari, so changes to core-site.xml/hdfs-site.xml cannot be made through the Ambari console. If I make those changes directly on the server via the CLI, how should I restart the NameNode/DataNode in a GPFS environment? Is restarting the connector enough for the new parameters to take effect, or do I need to restart the NameNode itself? If it is the connector, I have the "mmhadoopctl" command; if not, what command should I use so that the new parameters in the configuration files are picked up?
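For concreteness, the kind of change I have in mind is something like the following (the config path is illustrative; the real location depends on how Ambari laid out the files):

# edit a property directly on the node, bypassing Ambari
vi /etc/hadoop/conf/hdfs-site.xml    # e.g. change dfs.namenode.handler.count

# after saving, which of these actually makes the new value take effect?
/usr/lpp/mmfs/hadoop/sbin/mmhadoopctl connector stop
/usr/lpp/mmfs/hadoop/sbin/mmhadoopctl connector start
# ...or a restart of the NameNode/DataNode processes themselves?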

Upvotes: 0

Views: 235

Answers (2)

Daniel Kidger

Reputation: 1

Spectrum Scale (GPFS) provides its own NameNode service (and DataNode services too). This, though, is only a wrapper over the underlying Spectrum Scale filesystem and its metadata. The NameNode service is stateless: all information about files, ACLs and so on is kept in Spectrum Scale (and can be seen from the command line using POSIX and Spectrum Scale command-line tools).

/usr/lpp/mmfs/hadoop/sbin/mmhadoopctl connector stop      # stop the HDFS transparency connector

/usr/lpp/mmfs/hadoop/sbin/mmhadoopctl connector start     # start the connector again

/usr/lpp/mmfs/hadoop/sbin/mmhadoopctl connector getstate  # check the connector's state

i.e. do it using the GPFS (Spectrum Scale) commands, not the generic Hadoop NameNode service.
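Putting that together, a rough edit-and-restart sequence might look like this (the config directory is an assumption based on a typical HDFS Transparency layout; verify where your core-site.xml/hdfs-site.xml actually live on your install):

# edit the configuration directly on the node
vi /usr/lpp/mmfs/hadoop/etc/hadoop/hdfs-site.xml   # assumed location; adjust to your install

# restart the transparency connector so the new settings are picked up
/usr/lpp/mmfs/hadoop/sbin/mmhadoopctl connector stop
/usr/lpp/mmfs/hadoop/sbin/mmhadoopctl connector start

# confirm the connector services are back up
/usr/lpp/mmfs/hadoop/sbin/mmhadoopctl connector getstate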

Upvotes: 0

Weiwei Yang

Reputation: 19141

If the underlying file system is GPFS (non-HDFS), why are a NameNode and DataNodes still running? I suspect GPFS has its own configuration files and won't be aware of whatever you set in hdfs-site.xml.

Regardless, restarting the NameNode is pretty simple: log in as the hdfs user and run hadoop-daemon.sh stop namenode followed by hadoop-daemon.sh start namenode; the hadoop-daemon.sh script is under the sbin directory of HADOOP_HOME. A sketch follows below.
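A minimal sketch, assuming HADOOP_HOME points at the Hadoop install and the commands are run on the NameNode host:

su - hdfs
$HADOOP_HOME/sbin/hadoop-daemon.sh stop namenode
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode

# the same pattern applies to a DataNode, run on each DataNode host
$HADOOP_HOME/sbin/hadoop-daemon.sh stop datanode
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode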

Upvotes: 0
