David Beveridge

Reputation: 560

Hadoop: Using a Custom Object in a Mapper's Output

I am new to Hadoop and am stumped by something:

What I'm trying to do is take in a list of text-entries in files and have an initial mapper do some crunching on them and then output a customized object to be aggregated by the reducer.

I put together a framework using all Text values and it worked fine, but when I try to switch to our own object, I get an NPE (shown below).

Here is the Driver's run():

JobConf conf = new JobConf( getConf(), VectorConPreprocessor.class );
conf.setJobName( JOB_NAME + " - " + JOB_ISODATE );           
m_log.info("JOB NAME:  " + conf.getJobName() );

// Probably need to change this to be a chain-mapper later on . . . . 

conf.setInputFormat(  TextInputFormat.class          );    // reading text from files

conf.setMapperClass(         MapMVandSamples.class  );
conf.setMapOutputValueClass( SparsenessFilter.class );

//conf.setCombinerClass( CombineSparsenessTrackers.class );  // not using combiner, because ALL nodes must be gathered before reduction     
conf.setReducerClass(  ReduceSparsenessTrackers.class  );    // not sure reducing is required here . . . . 

conf.setOutputKeyClass(   Text.class );    // output key will be the SHA2
conf.setOutputValueClass( Text.class );    // output value will be the FeatureVectorMap
conf.setOutputFormat(     SequenceFileOutputFormat.class );    // binary object writer          

And here is the Mapper:

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.log4j.Logger;

public class MapMVandSamples extends MapReduceBase implements Mapper<LongWritable, Text, Text, SparsenessFilter> 
{

    public static final String DELIM = ":";
    protected static Logger m_log    = Logger.getLogger( MapMVandSamples.class );    

    // In this case we're reading a line of text at a time from the file
    // We don't really care about the SHA256 for now, just create a SparsenessFilter
    //   for each entry.  The reducer will aggregate them later.
    @Override
    public void map( LongWritable bytePosition, Text lineOfText, OutputCollector<Text, SparsenessFilter> outputCollector, Reporter reporter ) throws IOException
    {                
        String[] data = lineOfText.toString().split( DELIM, 2 );
        String sha256 = data[0];
        String json   = data[1];

        // create a SparsenessFilter for this record
        SparsenessFilter sf = new SparsenessFilter();
        // crunching goes here

        outputCollector.collect( new Text("AllOneForNow"), sf );    
    }

}

And, finally, the error:

14/03/05 21:56:56 INFO mapreduce.Job: Task Id : attempt_1394084907462_0002_m_000000_1, Status : FAILED
Error: java.lang.NullPointerException
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(MapTask.java:989)
at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:390)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:418)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)

Any ideas? Do I need to implement an interface on our SparsenessFilter so that the Mapper's OutputCollector can handle it?

Thanks!

Upvotes: 2

Views: 2680

Answers (2)

Venkat

Reputation: 1810

All custom keys and values should implement the WritableComparable interface.

You need to implement readFields(DataInput in), write(DataOutput out), and compareTo().

Example
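As a rough sketch (the two counter fields below are hypothetical placeholders; what matters is the no-arg constructor plus the write(), readFields(), and compareTo() pattern), SparsenessFilter could look something like this:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

public class SparsenessFilter implements WritableComparable<SparsenessFilter>
{
    private long entriesSeen;      // hypothetical field
    private long sparseEntries;    // hypothetical field

    // Hadoop instantiates this via reflection during deserialization,
    // so a no-arg constructor is required.
    public SparsenessFilter() { }

    @Override
    public void write( DataOutput out ) throws IOException
    {
        // serialize every field, in a fixed order
        out.writeLong( entriesSeen );
        out.writeLong( sparseEntries );
    }

    @Override
    public void readFields( DataInput in ) throws IOException
    {
        // deserialize in exactly the same order as write()
        entriesSeen   = in.readLong();
        sparseEntries = in.readLong();
    }

    @Override
    public int compareTo( SparsenessFilter other )
    {
        // only exercised if the class is used as a key; any consistent ordering works
        return Long.compare( entriesSeen, other.entriesSeen );
    }
}

The important detail is that write() and readFields() handle the same fields in the same order; that is what the framework uses to move your object between the map and reduce sides.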

Upvotes: 2

Mehraban

Reputation: 3324

Hadoop Text and IntWritable both implement these interfaces:

  1. Comparable
  2. Writable
  3. WritableComparable

I couldn't find documentation that states explicitly what a key or value class needs to implement, but the Comparable side seems to be what matters for a key class (keys have to be sorted), while the Writable interface is what matters for a value class (values only have to be serialized).
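To illustrate that split, here's a hypothetical value-only class (the field is made up) implementing just Writable, which is enough for something used purely as a map output value, as SparsenessFilter is in the question:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

public class SparsenessStats implements Writable
{
    private double sparsenessRatio;    // hypothetical field

    public SparsenessStats() { }       // required no-arg constructor

    @Override
    public void write( DataOutput out ) throws IOException
    {
        out.writeDouble( sparsenessRatio );
    }

    @Override
    public void readFields( DataInput in ) throws IOException
    {
        sparsenessRatio = in.readDouble();
    }
}

If the same class were ever used as a key, it would additionally need compareTo() (i.e. WritableComparable) so the framework could sort it during the shuffle.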

Upvotes: 1
