Don E

Reputation: 241

How do I shake off a NullPointerException?

Can't seem to shake off a NullPointerException. I've been coding for about a year now. I have tried every possible combination of !xyz.equals(null) and xyz[0] != null, but can't seem to find a solution. I have spent about two hours on this now. Help will be much appreciated. Thanks.

public class PedMapReducer extends Reducer<Text, Text, NullWritable, Text> 
{

Map<String, String> map = new LinkedHashMap<String, String>();
Map<String, String> ped = new LinkedHashMap<String, String>();
Set<String> s = new LinkedHashSet<String>();
List<String> arr = new ArrayList<String>();
String joined = null;

public void reduce(Text key, Iterable<Text> values, Context context)
{

    String [] lines = values.toString().split(",");

    if (lines[0] != null && lines[0] == "map_")
    {
        map.put(lines[1], lines.toString());
    }

    else if (lines[0] != null && lines[0] == "ped_")
    {
        ped.put(lines[1], lines.toString());
    }

}

public void cleanup(Context context) throws IOException, InterruptedException
{
    if(!map.entrySet().equals(null) && !ped.entrySet().equals(null))
    {
        for (Entry<String, String> entMap: map.entrySet())
        {
            for(Entry<String, String> entPed: ped.entrySet())
            {
                if(entMap.getKey().equals(entPed.getKey()))
                    joined = entMap.getValue() + "," + entPed.getValue();
            }
        }
        context.write(NullWritable.get(), new Text(joined));
    }


}

}

STACK

14/10/20 16:15:03 INFO mapreduce.Job: Task Id : attempt_1413663101908_0026_r_000110_0, Status : FAILED
Error: java.lang.NullPointerException
        at org.apache.hadoop.io.Text.encode(Text.java:450)
        at org.apache.hadoop.io.Text.set(Text.java:198)
        at org.apache.hadoop.io.Text.<init>(Text.java:88)
        at Map_Ped1.PedMapReducer.cleanup(PedMapReducer.java:49)
        at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:179)
        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)

Upvotes: 1

Views: 1027

Answers (1)

Khary Mendez

Reputation: 1848

When checking for string equality, the code uses == instead of .equals(), so the comparisons always fail and no entries are ever added to either collection. Use .equals() instead:

if (lines[0] != null && lines[0].equals("map_") )
{
    map.put(lines[1], lines.toString());
}

else if (lines[0] != null && lines[0].equals("ped_") )
{
    ped.put(lines[1], lines.toString());
}
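
As a standalone illustration (not part of the Hadoop job), here is a minimal sketch of why == fails here: strings produced at runtime, such as the tokens from split(","), are distinct objects, so == compares references while .equals() compares contents:

```java
public class StringEqualsDemo {
    public static void main(String[] args) {
        // Simulates a token parsed at runtime, as split(",") would produce
        String token = new String("map_");

        System.out.println(token == "map_");      // false: compares object references
        System.out.println(token.equals("map_")); // true: compares character contents
    }
}
```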

Also, since those collections (map and ped) stay empty, joined remains null, and new Text(joined) is then passed that null. The Text.<init> frame in the stack trace confirms the Text constructor is where the NullPointerException is thrown:

 at org.apache.hadoop.io.Text.<init>(Text.java:88)
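
Even with the equality check fixed, it is safer to guard against a null joined before constructing the Text (no key may ever match). A minimal standalone sketch of that guard, using a hypothetical encode() stand-in for what Hadoop's Text constructor does internally:

```java
public class NullGuardDemo {
    // Hypothetical stand-in for Text's internal encoding: like the real
    // constructor, it throws NullPointerException when given null.
    static byte[] encode(String s) {
        return s.getBytes(java.nio.charset.StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String joined = null; // no join was ever found, as in the question

        // Guard before constructing: skip the write instead of crashing
        if (joined != null) {
            encode(joined);
        } else {
            System.out.println("skipping write: joined is null");
        }
    }
}
```

In the reducer's cleanup(), the same pattern means wrapping context.write(NullWritable.get(), new Text(joined)) in an if (joined != null) check.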

Upvotes: 2
