Blue

Reputation: 611

Why do I get 3xx duplicates in Hadoop's MapReduce?

I'm using Hadoop's MapReduce to read a file from HDFS, run it through a simple parser, and write the parser's output back to HDFS. I don't have a reduce task yet. I'm wondering why I get about 300 duplicates in my output file.

Here is my map method.

    public void map(LongWritable key, Text value,
            OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {

        FileSplit fsplit = (FileSplit) reporter.getInputSplit();
        Main parser = new Main();
        String datFilePath = fsplit.getPath().getName();
        String valueMap = "/path/to/file";

        Path pt = fsplit.getPath();

        FileSystem fs = null;
        try {
            fs = FileSystem.get(new URI("hdfs://xxx.xxx.x.x:xxxx"),
                    new Configuration());
        } catch (URISyntaxException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }

        try (FSDataInputStream inputStream = fs.open(pt)) {
            ReadableByteChannel channel = Channels.newChannel(inputStream);
            ByteBuffer buffer = ByteBuffer.allocate((int) fs.getFileStatus(pt).getLen());
            channel.read(buffer);
            buffer.order(ByteOrder.LITTLE_ENDIAN);

            SimpleKeyValueStructure map = parser.parse(datFilePath, buffer,
                    valueMap);

            String lrtransPath = map.getInputIdentifier();
            SortedMap<String, Object> data = map.getData();
            for (Entry<String, Object> entry : data.entrySet()) {
                term.set(entry.getKey());
                pathToFile.set(entry.getValue().toString());
                output.collect(term, pathToFile);
            }
            count += 1;
            System.out.println(count);
        }
    }
}

I print the count at the end and it is indeed 3xx. Is this a configuration problem? My job configuration:

    JobConf conf = new JobConf(MapReduce.class);
    conf.setJobName("jobxyz");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(Text.class);

    conf.setMapperClass(Map.class);
    conf.setCombinerClass(Reduce.class);
    conf.setReducerClass(Reduce.class);
    conf.setNumReduceTasks(0);

    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);

The output is completely correct, but duplicated.

Upvotes: 0

Views: 112

Answers (1)

Phani Rahul

Reputation: 860

A Mapper is instantiated for every input split of the file, and its map() method is called once for every record in that split. Your code reads and parses the whole file inside map(), so it runs once per input record rather than once per file, and every record produces another copy of the same output.
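
If the goal is to run the expensive parse only once, one option with the old mapred API is to guard the work with a flag so it executes a single time per mapper. A minimal sketch, assuming a hypothetical ParseOnceMapper and eliding the parser and HDFS-reading details from the question:

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileSplit;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    // Hypothetical class name; the parser details from the question are elided.
    public class ParseOnceMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, Text> {

        private boolean parsed = false;               // set once the split's file has been parsed
        private final Text term = new Text();         // same output fields as in the question
        private final Text pathToFile = new Text();

        @Override
        public void map(LongWritable key, Text value,
                OutputCollector<Text, Text> output, Reporter reporter)
                throws IOException {
            if (parsed) {
                return;                               // later records of this split are skipped
            }
            parsed = true;

            FileSplit fsplit = (FileSplit) reporter.getInputSplit();
            // Open fsplit.getPath(), run the parser, and call
            // output.collect(term, pathToFile) for each entry, exactly as in the
            // question -- this branch now executes once per split, not once per record.
        }
    }

Note that this still runs once per input split; if the file is large enough to be split, you would also need to make it non-splittable (for example by overriding isSplitable() in your input format) or use a whole-file input format so the parse happens exactly once per file.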

Upvotes: 1
