Razvi

Reputation: 2818

Hadoop error on executing job

I tried to run an example and get the following output:

12/06/30 12:27:39 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/06/30 12:27:39 INFO input.FileInputFormat: Total input paths to process : 7
12/06/30 12:27:40 INFO mapred.JobClient: Running job: job_local_0001
12/06/30 12:27:40 INFO input.FileInputFormat: Total input paths to process : 7
12/06/30 12:27:40 INFO mapred.MapTask: io.sort.mb = 100
12/06/30 12:27:41 INFO mapred.MapTask: data buffer = 79691776/99614720
12/06/30 12:27:41 INFO mapred.MapTask: record buffer = 262144/327680
12/06/30 12:27:41 INFO mapred.JobClient:  map 0% reduce 0%
12/06/30 12:27:41 INFO mapred.MapTask: Starting flush of map output
12/06/30 12:27:41 WARN mapred.LocalJobRunner: job_local_0001
java.io.IOException: Expecting a line not the end of stream
    at org.apache.hadoop.fs.DF.parseExecResult(DF.java:109)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:179)
    at org.apache.hadoop.util.Shell.run(Shell.java:134)
    at org.apache.hadoop.fs.DF.getAvailable(DF.java:73)
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:329)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
    at org.apache.hadoop.mapred.MapOutputFile.getSpillFileForWrite(MapOutputFile.java:107)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1221)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1129)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:549)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:623)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
12/06/30 12:27:42 INFO mapred.JobClient: Job complete: job_local_0001
12/06/30 12:27:42 INFO mapred.JobClient: Counters: 0

Does anyone know why I get this error? Hadoop version is 0.20.2.

Upvotes: 0

Views: 887

Answers (1)

Razvi

Reputation: 2818

Apparently the df command also needs to be available on the machine running Eclipse. In my case I had two Ubuntu VMs (acting as master and slave) and was running Eclipse with the Hadoop plugin from Windows. After installing Cygwin and adding it to the PATH, the error went away.
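This matches the stack trace: org.apache.hadoop.fs.DF shells out to the system df command and parses its output, so when df isn't on the PATH the command produces no output and parseExecResult throws "Expecting a line not the end of stream". A quick sanity check from the shell whose environment Eclipse inherits (a sketch, assuming a Unix-style shell such as Cygwin's bash on Windows):

```shell
# Verify df is reachable on the PATH that Eclipse/Hadoop will use
which df || echo "df not found: install Cygwin and add its bin directory to PATH"

# Hadoop's DF class runs roughly this and parses the output lines;
# it should print a header plus at least one data row
df -k .
```

If `which df` prints nothing, the Hadoop plugin will hit the same IOException until df is installed and on the PATH.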

Upvotes: 3
