MRK

Reputation: 21

Error while running MapReduce program

I am getting the following error while running a MapReduce program.

The program sorts the output using TotalOrderPartitioner.

I have a 2-node cluster.
When I run the program with -D mapred.reduce.tasks=2 it works fine, but it fails with the error below when I run it with -D mapred.reduce.tasks=3.
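
The driver is set up roughly like this (simplified; the input format, sampler parameters, and paths below are placeholders, not the exact code):

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.KeyValueTextInputFormat;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;
import org.apache.hadoop.mapred.lib.InputSampler;
import org.apache.hadoop.mapred.lib.TotalOrderPartitioner;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class TotalOrderSort extends Configured implements Tool {

    public int run(String[] args) throws Exception {
        // ToolRunner/GenericOptionsParser picks up -D mapred.reduce.tasks=N
        // from the command line and puts it into this configuration.
        JobConf conf = new JobConf(getConf(), TotalOrderSort.class);
        conf.setJobName("total-order-sort");

        // Identity map/reduce: the partitioner plus the shuffle sort do the ordering.
        conf.setInputFormat(KeyValueTextInputFormat.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);
        conf.setMapperClass(IdentityMapper.class);
        conf.setReducerClass(IdentityReducer.class);
        conf.setPartitionerClass(TotalOrderPartitioner.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        // Partition file read by TotalOrderPartitioner.configure(); the path is arbitrary.
        Path partitionFile = new Path("/tmp/sort_partitions");
        TotalOrderPartitioner.setPartitionFile(conf, partitionFile);

        // Sample the input to pick the split points (freq / numSamples /
        // maxSplitsSampled values here are placeholders).
        InputSampler.RandomSampler<Text, Text> sampler =
            new InputSampler.RandomSampler<Text, Text>(0.1, 10000, 10);
        InputSampler.writePartitionFile(conf, sampler);

        JobClient.runJob(conf);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new TotalOrderSort(), args));
    }
}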


java.lang.RuntimeException: Error in configuring object
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
        at org.apache.hadoop.mapred.MapTask$OldOutputCollector.<init>(MapTask.java:448)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
        ... 6 more
Caused by: java.lang.IllegalArgumentException: Can't read partitions file
        at org.apache.hadoop.mapred.lib.TotalOrderPartitioner.configure(TotalOrderPartitioner.java:91)
        ... 11 more
Caused by: java.io.IOException: Split points are out of order
        at org.apache.hadoop.mapred.lib.TotalOrderPartitioner.configure(TotalOrderPartitioner.java:78)
        ... 11 more

Please let me know what's wrong here.

Thanks
R

Upvotes: 1

Views: 3844

Answers (3)

lanyun

Reputation: 141

I also hit this problem. After checking the source code I found it is caused by the sampling: increasing the number of reducers can make the split points contain duplicate elements, which throws this error. It depends on your data. Run hadoop fs -text on the generated _partition file and look at the split points; if your tasks are failing, there will be duplicate elements.
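
A quick way to check, assuming the partition file was written by InputSampler with Text keys and NullWritable values (the path argument is whatever you passed to TotalOrderPartitioner.setPartitionFile), is a small reader like this:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class CheckSplitPoints {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        Path partitionFile = new Path(args[0]); // path of the partition file on HDFS
        FileSystem fs = partitionFile.getFileSystem(conf);
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, partitionFile, conf);
        Text key = new Text();
        Text previous = null;
        // TotalOrderPartitioner needs strictly increasing split points; a key that is
        // <= its predecessor is exactly what produces "Split points are out of order".
        while (reader.next(key, NullWritable.get())) {
            if (previous != null && key.compareTo(previous) <= 0) {
                System.out.println("Bad split point: " + key + " after " + previous);
            }
            previous = new Text(key);
        }
        reader.close();
    }
}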

Upvotes: 0

London guy

Reputation: 28012

The maximum number of reducers that can be specified is equal to the number of nodes in your cluster. Since the number of nodes here is 2, you cannot set the number of reducers to be greater than 2.

Upvotes: 2

cftarnas

Reputation: 1755

Sounds like you don't have enough keys in your partition file. The docs say that TotalOrderPartitioner requires at least N - 1 keys, where N is the number of reducers, in your partition SequenceFile.
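
For example, a partition file for three reducers needs two strictly increasing keys. Whether you build it with InputSampler or by hand, it has to hold that many split points; a hand-built sketch (the keys and path below are made up) looks like this:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class WritePartitionFile {
    public static void main(String[] args) throws IOException {
        // Hand-built partition SequenceFile with N - 1 = 2 keys for
        // -D mapred.reduce.tasks=3.
        Configuration conf = new Configuration();
        Path partitionFile = new Path("/tmp/sort_partitions");
        FileSystem fs = partitionFile.getFileSystem(conf);
        SequenceFile.Writer writer = SequenceFile.createWriter(
                fs, conf, partitionFile, Text.class, NullWritable.class);
        writer.append(new Text("h"), NullWritable.get()); // keys < "h" go to reducer 0
        writer.append(new Text("p"), NullWritable.get()); // "h" <= keys < "p" go to reducer 1, the rest to reducer 2
        writer.close();
    }
}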

Upvotes: 1
