Gayathri priya

Reputation: 9

Reducer is not being called

This is code for an Ebola data set. The reducer is not being called at all; only the mapper output is printed.

The driver class:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
public class Ebola {
        public static void main(String[] args) throws Exception , ArrayIndexOutOfBoundsException{

                Configuration con1 = new Configuration();
                con1.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", " "); 
                Job job1 = new Job(con1, "Ebola");

                job1.setJarByClass(Ebola.class); 
                job1.setInputFormatClass(KeyValueTextInputFormat.class);
                job1.setOutputFormatClass(TextOutputFormat.class);        
                job1.setOutputKeyClass(Text.class);
                job1.setOutputValueClass(Text.class);
                job1.setMapperClass(EbolaMapper.class);      
                job1.setReducerClass(EbolReducer.class);

                FileInputFormat.addInputPath(job1, new Path(args[0]));        
                FileOutputFormat.setOutputPath(job1, new Path(args[1]));
                job1.waitForCompletion(true);
        }
}

This is the mapper:

import java.io.IOException;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Mapper;
public class EbolaMapper extends Mapper <Text, Text, Text, Text> {
        public void map(Text key, Text value, Context con) throws IOException, InterruptedException {
                Text cumValues = new Text();
                String record = value.toString();

                String p[] = record.split(" ",2);

                String cases = p[0];
                String death = p[1];

                String cValues =  death + "->" + cases;

                cumValues.set(cValues);

                con.write(key, cumValues);                  
        }
}

Finally, the reducer:

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
public class EbolReducer extends Reducer<Text, Text, Text, Text> {
        public void reduce(Text key, Text value, Context con) throws IOException{
                Text cumulValues = new Text();                  
                String cumVal = value.toString();
                String[] p = cumVal.split("->",2);
                String death = p[0];
                String cases = p[1];
                Float d = Float.parseFloat(death);
                Float c = Float.parseFloat(cases);
                Float perc = (d/c)*100;
                String percent = String.valueOf(perc);
                cumulValues.set(percent);
                con.write(key,cumulValues);
        }
}

The output is just the mapper output. The reducer is not being called. Any help would be appreciated.

Upvotes: 0

Views: 357

Answers (1)

user3484461

Reputation: 1143

Instead of

public void reduce(Text key, Text value, Context con)

you need to use an Iterable:

public void reduce(Text key, Iterable<Text> values, Context con)

With a single Text parameter, your method only overloads Reducer.reduce instead of overriding it. The framework never calls it; it falls back to Reducer's default identity implementation, which writes every mapper output pair through unchanged. That is exactly why you see only the mapper output. Adding an @Override annotation to the method would have turned this mistake into a compile-time error. In the corrected method, loop over values and apply your split/percentage logic to each value.
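To see why the framework silently ignores the wrongly-typed method, here is a minimal self-contained sketch. It does not use Hadoop; MiniReducer, WrongSignature, and Demo are made-up names that mimic how Reducer provides a default identity reduce which runs whenever a subclass merely overloads, rather than overrides, it:

```java
import java.util.Arrays;
import java.util.List;

// Stand-in for org.apache.hadoop.mapreduce.Reducer: the base class
// ships a default "identity" reduce that passes values through.
class MiniReducer<K, V> {
    protected String reduce(K key, Iterable<V> values) {
        StringBuilder out = new StringBuilder();
        for (V v : values) {
            out.append(key).append("\t").append(v).append("\n");
        }
        return out.toString();
    }
}

// Declares reduce(K, V) instead of reduce(K, Iterable<V>): this
// OVERLOADS the base method, it does not override it, so the
// framework never invokes it. @Override here would not compile.
class WrongSignature extends MiniReducer<String, String> {
    public String reduce(String key, String value) {
        return key + "\t" + "custom:" + value;
    }
}

public class Demo {
    public static void main(String[] args) {
        MiniReducer<String, String> r = new WrongSignature();
        List<String> values = Arrays.asList("5->10");
        // The base-class identity reduce runs, so the "custom:"
        // logic is never reached -- mirroring the asker's symptom
        // of seeing only mapper output.
        String out = r.reduce("Guinea", values);
        System.out.println(out);
    }
}
```

Once the signature matches reduce(Text key, Iterable&lt;Text&gt; values, Context con) exactly, the subclass method overrides the base one and is actually invoked.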

Upvotes: 1
