user3836278

Reputation: 11

Hadoop: DataNodes are not starting

I have installed Hadoop 2.2 on a CentOS 6.5 system, but when I run start-dfs.sh, the DataNodes on both my master and slave machines fail to start. I am attaching the DataNode log.

2014-07-14 17:22:07,797 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = ollh/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.2.0
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.6.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.2.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/usr/loca
l/hadoop/share/hadoop/common/hadoop-nfs-2.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/yarn/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-
applications-unmanaged-am-launcher-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG:   java = 1.7.0_60
************************************************************/
2014-07-14 17:22:07,806 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2014-07-14 17:22:08,617 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-07-14 17:22:09,260 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2014-07-14 17:22:09,368 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2014-07-14 17:22:09,368 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2014-07-14 17:22:09,373 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is localhost
2014-07-14 17:22:09,416 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2014-07-14 17:22:09,444 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2014-07-14 17:22:09,615 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2014-07-14 17:22:09,705 INFO org.apache.hadoop.http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2014-07-14 17:22:09,709 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2014-07-14 17:22:09,709 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2014-07-14 17:22:09,710 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2014-07-14 17:22:09,716 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 0.0.0.0:50075
2014-07-14 17:22:09,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
2014-07-14 17:22:09,721 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2014-07-14 17:22:09,722 INFO org.mortbay.log: jetty-6.1.26
2014-07-14 17:22:10,241 INFO org.mortbay.log: Started [email protected]:50075
2014-07-14 17:22:10,690 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2014-07-14 17:22:10,726 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2014-07-14 17:22:10,747 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2014-07-14 17:22:10,780 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
2014-07-14 17:22:10,796 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (storage id unknown) service to master/192.168.1.122:9000 starting to offer service
2014-07-14 17:22:10,804 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-07-14 17:22:10,807 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2014-07-14 17:22:11,658 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hduser/mydata/hdfs/dfs/data/in_use.lock acquired by nodename 20870@ollh
2014-07-14 17:22:11,674 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-273134468-127.0.0.1-1405329708027 (storage id DS-803443442-127.0.0.1-50010-1405328424841) service to master/192.168.1.122:9000
java.io.IOException: Incompatible clusterIDs in /home/hduser/mydata/hdfs/dfs/data: namenode clusterID = CID-fbd216d7-06f4-44b5-a6b6-ff2f4d5e677f; datanode clusterID = CID-c5ce9a14-d391-48fe-9c7f-bd9af5b9cd5e
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
    at java.lang.Thread.run(Thread.java:745)
2014-07-14 17:22:11,677 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool BP-273134468-127.0.0.1-1405329708027 (storage id DS-803443442-127.0.0.1-50010-1405328424841) service to master/192.168.1.122:9000
2014-07-14 17:22:11,798 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool BP-273134468-127.0.0.1-1405329708027 (storage id DS-803443442-127.0.0.1-50010-1405328424841)
2014-07-14 17:22:13,798 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2014-07-14 17:22:13,800 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2014-07-14 17:22:13,802 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ollh/127.0.0.1
************************************************************/

This is how I ran start-dfs.sh and start-yarn.sh:

[hduser@ollh hadoop]$ start-dfs.sh
14/07/14 17:22:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-ollh.out
slave1: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-ollcf.out
master: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-ollh.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-ollh.out
14/07/14 17:22:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hduser@ollh hadoop]$ jps
11509 JobHistoryServer
21228 Jps
20768 NameNode
21059 SecondaryNameNode
[hduser@ollh hadoop]$ start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-ollh.out
slave1: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-ollcf.out
master: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-ollh.out
[hduser@ollh hadoop]$ jps
11509 JobHistoryServer
20768 NameNode
21059 SecondaryNameNode
21395 NodeManager
21290 ResourceManager
21431 Jps

This is the current /etc/hosts information for the master and slave IP addresses:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1      localhost localhost.localdomain localhost6 localhost6.localdomain6 ollh
192.168.1.109   slave1
192.168.1.122   master

Running hadoop namenode -format gives me this:

STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG:   java = 1.7.0_60
************************************************************/
14/07/14 17:39:27 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/07/14 17:39:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-1879875b-52b4-4c34-87c0-709c45b37a63
14/07/14 17:39:28 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/07/14 17:39:28 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/07/14 17:39:28 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/07/14 17:39:28 INFO util.GSet: Computing capacity for map BlocksMap
14/07/14 17:39:28 INFO util.GSet: VM type       = 64-bit
14/07/14 17:39:28 INFO util.GSet: 2.0% max memory = 889 MB
14/07/14 17:39:28 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/07/14 17:39:28 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/07/14 17:39:28 INFO blockmanagement.BlockManager: defaultReplication         = 2
14/07/14 17:39:28 INFO blockmanagement.BlockManager: maxReplication             = 512
14/07/14 17:39:28 INFO blockmanagement.BlockManager: minReplication             = 1
14/07/14 17:39:28 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
14/07/14 17:39:28 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
14/07/14 17:39:28 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/07/14 17:39:28 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
14/07/14 17:39:28 INFO namenode.FSNamesystem: fsOwner             = hduser (auth:SIMPLE)
14/07/14 17:39:28 INFO namenode.FSNamesystem: supergroup          = supergroup
14/07/14 17:39:28 INFO namenode.FSNamesystem: isPermissionEnabled = false
14/07/14 17:39:28 INFO namenode.FSNamesystem: HA Enabled: false
14/07/14 17:39:28 INFO namenode.FSNamesystem: Append Enabled: true
14/07/14 17:39:29 INFO util.GSet: Computing capacity for map INodeMap
14/07/14 17:39:29 INFO util.GSet: VM type       = 64-bit
14/07/14 17:39:29 INFO util.GSet: 1.0% max memory = 889 MB
14/07/14 17:39:29 INFO util.GSet: capacity      = 2^20 = 1048576 entries
14/07/14 17:39:29 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/07/14 17:39:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/07/14 17:39:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/07/14 17:39:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
14/07/14 17:39:29 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/07/14 17:39:29 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/07/14 17:39:29 INFO util.GSet: Computing capacity for map Namenode Retry Cache
14/07/14 17:39:29 INFO util.GSet: VM type       = 64-bit
14/07/14 17:39:29 INFO util.GSet: 0.029999999329447746% max memory = 889 MB
14/07/14 17:39:29 INFO util.GSet: capacity      = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /home/hduser/mydata/hdfs/dfs/name ? (Y or N) Y
14/07/14 17:39:31 INFO common.Storage: Storage directory /home/hduser/mydata/hdfs/dfs/name has been successfully formatted.
14/07/14 17:39:31 INFO namenode.FSImage: Saving image file /home/hduser/mydata/hdfs/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
14/07/14 17:39:31 INFO namenode.FSImage: Image file /home/hduser/mydata/hdfs/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 198 bytes saved in 0 seconds.
14/07/14 17:39:32 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/07/14 17:39:32 INFO util.ExitUtil: Exiting with status 0
14/07/14 17:39:32 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ollh/127.0.0.1
************************************************************/

Upvotes: 1

Views: 3635

Answers (5)

Vignesh Menon

Reputation: 9

I faced a similar issue in my local VM where the NameNode wasn't starting. There was a valid entry in core-site.xml, but when ./start-all.sh was run, every daemon except the NameNode came up. To get around this, I formatted the NameNode again.

Command: hadoop namenode -format

Then run ./start-all.sh from the bin folder. Run the jps command to see whether the NameNode is up along with the other daemons (DataNode, TaskTracker, JobTracker, SecondaryNameNode).

If your DataNode is not starting, I would check the conf file called 'slaves' to see whether its entries are correct.
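For reference, a quick way to inspect that file (a sketch assuming the install prefix /usr/local/hadoop visible in the question's classpath; in Hadoop 2.x the file lives under etc/hadoop):

    cat /usr/local/hadoop/etc/hadoop/slaves
    # should list one DataNode host per line, matching /etc/hosts, e.g.
    # master
    # slave1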

Note: This was an initial set up stage, hence no data was present in the HDFS.

Upvotes: 0

Code wrangler
Code wrangler

Reputation: 134

Follow these steps (a concrete sketch of the whole sequence follows the list):

  1. Shut down the cluster.
  2. Manually remove the directory at the hadoop.tmp.dir location.
  3. Format the cluster.
  4. Restart.
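A minimal sketch of the sequence, assuming hadoop.tmp.dir points at the /home/hduser/mydata/hdfs storage seen in the question's logs; note that formatting erases everything stored in HDFS:

    stop-yarn.sh && stop-dfs.sh               # 1. shut down the cluster
    rm -rf /home/hduser/mydata/hdfs/dfs/data  # 2. remove DataNode storage (run on master and slave1)
    hdfs namenode -format                     # 3. format the cluster (on the master only)
    start-dfs.sh && start-yarn.sh             # 4. restart, then verify with jps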

Let me know if the issue still persists.


Upvotes: 1

jagadeesh

Reputation: 1

Check the namespaceID in the VERSION file for both the NameNode and the DataNode; they should be the same. If they are not, copy the NameNode's value over the DataNode's and start the DataNode again. That will solve your problem.
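On this cluster the VERSION files sit under the storage paths shown in the question's logs. Note that the exception above complains about clusterID rather than namespaceID, so in Hadoop 2.x that is the field to align:

    cat /home/hduser/mydata/hdfs/dfs/name/current/VERSION   # NameNode (master)
    cat /home/hduser/mydata/hdfs/dfs/data/current/VERSION   # each DataNode
    # make the DataNode's clusterID equal to the NameNode's, e.g.
    # clusterID=CID-fbd216d7-06f4-44b5-a6b6-ff2f4d5e677f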

Upvotes: 0

Suresh Ram

Reputation: 1034

Remove your tmp folder (it contains the DataNode and NameNode directories) and then format your NameNode:

hadoop namenode -format
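For example (a sketch assuming the question's storage layout under /home/hduser/mydata/hdfs; this wipes all HDFS data):

    rm -rf /home/hduser/mydata/hdfs/dfs/name /home/hduser/mydata/hdfs/dfs/data
    hadoop namenode -format
    start-dfs.sh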

Your problem will be solved.

Upvotes: 0

pckmn

Reputation: 500

You can try:

hadoop datanode -rollback

Upvotes: 0
