vana

Reputation: 31

ClassNotFoundException while running Hadoop

I am new to hadoop.

I have a file WordCount.java which references hadoop.jar and stanford-parser.jar.

I am running the following commands:

javac -classpath .:hadoop-0.20.1-core.jar:stanford-parser.jar -d ep WordCount.java 

jar cvf ep.jar -C ep .

bin/hadoop jar ep.jar WordCount gutenburg gutenburg1

After executing, I am getting the following error:

java.lang.ClassNotFoundException: edu.stanford.nlp.parser.lexparser.LexicalizedParser

The class is in stanford-parser.jar ...

What could the problem be?

Thanks

Upvotes: 3

Views: 9517

Answers (5)

Kuro Kurosaka

Reputation: 21

I've just found out that you can simply edit $HADOOP_HOME/conf/hadoop-env.sh and add your JARs to HADOOP_CLASSPATH. This is probably the simplest and most efficient approach.
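For instance, a minimal sketch; the path below is a placeholder for wherever stanford-parser.jar actually lives. Add a line like this to $HADOOP_HOME/conf/hadoop-env.sh:

export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/path/to/stanford-parser.jar

Note that HADOOP_CLASSPATH extends the classpath of the JVM launched by bin/hadoop; on a distributed cluster the task nodes still need the jar shipped to them (via -libjars or by bundling it into your jar).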

Upvotes: 1

Kuro Kurosaka

Reputation: 21

I had the same problem. I think the reason the -libjars option isn't recognized by your program is that you are not parsing it by calling GenericOptionsParser.getRemainingArgs(). In Hadoop 0.21.0's WordCount.java example (in mapred/src/examples/org/apache/hadoop/examples/), this piece of code appears, and after doing the same in my program, -libjars comma-separated-jars is recognized:

// GenericOptionsParser strips out generic Hadoop options such as
// -libjars and returns only the application-specific arguments
String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
if (otherArgs.length != 2) {
  System.err.println("Usage: wordcount <in> <out>");
  System.exit(2);
}

...
// use the remaining arguments, not the raw args, for the paths
FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));

Upvotes: 1

dehowell

Reputation: 19

mdma is on the right track, but you'll also need your job driver to implement Tool (and be launched via ToolRunner) so that -libjars actually gets parsed; see the sketch below.
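A minimal sketch of such a driver, assuming the new (org.apache.hadoop.mapreduce) API from Hadoop 0.20; the class name WordCountDriver is a placeholder, and the mapper/reducer setup is elided:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    // By the time run() is called, ToolRunner has already consumed
    // generic options such as -libjars, so args holds only in/out paths
    if (args.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      return 2;
    }
    Job job = new Job(getConf(), "word count");
    job.setJarByClass(WordCountDriver.class);
    // set mapper, reducer and output key/value classes here,
    // as in the standard WordCount example
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static void main(String[] args) throws Exception {
    // ToolRunner feeds args through GenericOptionsParser before calling run()
    System.exit(ToolRunner.run(new Configuration(), new WordCountDriver(), args));
  }
}

With that in place, an invocation like bin/hadoop jar ep.jar WordCountDriver -libjars stanford-parser.jar gutenburg gutenburg1 should ship the parser jar to the tasks.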

Upvotes: 1

Binary Nerd

Reputation: 13937

Another option you can try, since -libjars doesn't seem to be working for you, is to package everything into a single jar, i.e. your code plus its dependencies.

This was how it had to be done prior to roughly Hadoop 0.18.0 (it was fixed somewhere around that release).

Using Ant (I use Ant in Eclipse) you can set up a build that unpacks the dependencies and adds them to the target build project. You can probably do this by hand, though, by manually unpacking the dependency jar and adding the contents to your jar, as sketched below.
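For example, a rough sketch of the manual route, reusing the ep classes directory from the question (you may want to delete the dependency's extracted META-INF afterwards to avoid manifest clashes):

cd ep
jar xf ../stanford-parser.jar
cd ..
jar cvf ep.jar -C ep .

After this, ep.jar contains both WordCount.class and the edu/stanford/... class tree, so no extra classpath configuration is needed at run time.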

Even though I use 0.20.1 now, I still use this method. It makes starting a job from the command line simpler.

Upvotes: 0

mdma

Reputation: 57777

I think you need to add the stanford-parser jar when invoking hadoop as well, not just when compiling. (If you look inside ep.jar, I imagine it will only contain one file: WordCount.class.)

E.g.

bin/hadoop jar ep.jar WordCount -libjars stanford-parser.jar gutenburg gutenburg1

See the Map/Reduce Tutorial.

Upvotes: 2
