hbr

Reputation: 469

Hadoop MapReduce program fails with exit code 127

Trying to run a Hadoop program. I can see the NameNode, DataNode, and YARN cluster web UIs up and running, i.e. 127.0.0.1:50070/dfshealth.jsp, localhost:8088/cluster/cluster, etc.

But when I try to run my MapReduce program as: $ hadoop MySampleProgram hdfs://localhost/user/cyg_server/input/myfile.txt hdfs://localhost/user/cyg_server/output/op

The program fails with the following logs:

INFO mapreduce.Job (Job.java:monitorAndPrintJob(1295)) - map 0% reduce 0%

INFO mapreduce.Job (Job.java:monitorAndPrintJob(1308)) - Job job_1354496967950_0003 failed with state FAILED due to: Application application_1354496967950_0003 failed 1 times due to AM Container for appattempt_1354496967950_0003_000001 exited with exitCode: 127 due to: .Failing this attempt.. Failing the application.

2012-12-03 07:29:50,544 INFO mapreduce.Job (Job.java:monitorAndPrintJob(1313)) - Counters: 0

When I dug through some of the logs I noticed this: nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:launchContainer(193)) - Exit code from task is : 127

I am running on Windows 7 with Cygwin.

Any input is greatly appreciated.

:::ADDING MORE INFO HERE::: As of now I can see that the following Hadoop source fails during execution while trying to launch the container. I am adding the source URL for that file here (note this is not a Hadoop bug, I am just pointing out where the thing I am missing shows up): class DefaultContainerExecutor, method launchContainer, from the start of the method down to line 195 where it prints the exit code. A rough sketch of what that call does follows the link.

http://grepcode.com/file/repo1.maven.org/maven2/org.apache.hadoop/hadoop-yarn-server-nodemanager/0.23.1/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java#193
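As far as I can tell, launchContainer just runs the generated default_container_executor.sh script through bash and logs whatever exit code comes back, and in bash an exit code of 127 means "command not found". Below is a rough sketch of that call for illustration only (my simplification, not the actual Hadoop source; the class name LaunchContainerSketch is made up):

    import java.io.IOException;

    import org.apache.hadoop.util.Shell.ShellCommandExecutor;

    // Simplified illustration of the bash launch + exit-code logging done in
    // DefaultContainerExecutor.launchContainer (around line 193).
    public class LaunchContainerSketch {
        public static void main(String[] args) {
            String script = args[0]; // e.g. .../default_container_executor.sh
            ShellCommandExecutor shExec =
                    new ShellCommandExecutor(new String[] { "bash", script });
            try {
                shExec.execute(); // runs: bash <script>
            } catch (IOException e) {
                // Hadoop logs this as "Exit code from task is : <code>";
                // 127 here means bash could not find a command the script invokes.
                System.out.println("Exit code from task is : " + shExec.getExitCode());
            }
        }
    }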

NODE MANAGER LOG EXTRACT

INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:launchContainer(175)) - launchContainer: [bash, /tmp/nm-local-...2936_0003/container_1354566282936_0003_01_000001/default_container_executor.sh]

WARN nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:launchContainer(193)) - Exit code from task is : 127

INFO nodemanager.ContainerExecutor (ContainerExecutor.java:logOutput(167)) -

WARN launcher.ContainerLaunch (ContainerLaunch.java:call(274)) - Container exited with a non-zero exit code 127

Thanks, Hari

Upvotes: 3

Views: 7965

Answers (2)

KayV

Reputation: 13835

Hard-coding the Java home path inside hadoop-env.sh solved the issue for me, as follows:

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home
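(The path above is from a macOS JDK install; the general idea is that JAVA_HOME in hadoop-env.sh has to point at a JDK that actually exists on the node. Exit code 127 is the shell's "command not found", which is what you would expect if the container launch script cannot find the java binary.)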

Upvotes: 3

Yevgen Yampolskiy

Reputation: 7180

I ran into this issue when I tried to use libraries that are not included in the standard Hadoop distribution (org.apache.lucene in my case). The solution was to add the missing libraries to the YARN classpath using the "yarn.application.classpath" configuration property:

    // conf is the job's org.apache.hadoop.conf.Configuration
    String cp = conf.get("yarn.application.classpath");
    String home = System.getenv("HOME");
    // yarn.application.classpath is a comma-separated list of classpath entries
    cp += "," + home + "/.m2/repository/org/apache/lucene/lucene-core/4.4.0/*";
    cp += "," + home + "/.m2/repository/org/apache/lucene/lucene-analyzers/4.4.0/*";
    cp += "," + home + "/.m2/repository/org/apache/lucene/lucene-analyzers-common/4.4.0/*";
    conf.set("yarn.application.classpath", cp);

Upvotes: 1
