Reputation: 19011
I am really struggling to run an HBase MapReduce job with Hadoop.
I am using the Hortonworks Hadoop 2 distribution. The HBase version I use is 0.96.1-hadoop2. When I try to run my MapReduce job like this:
hadoop jar target/invoice-aggregation-0.1.jar start="2014-02-01 01:00:00" end="2014-02-19 01:00:00" firstAccountId=0 lastAccountId=10
Hadoop tells me that it cannot find invoice-aggregation-0.1.jar in its file system. I am wondering why it needs to be there?
Here is the error I get:
14/02/05 10:31:48 ERROR security.UserGroupInformation: PriviledgedActionException as:adio (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: hdfs://localhost:8020/home/adio/workspace/projects/invoice-aggregation/target/invoice-aggregation-0.1.jar
java.io.FileNotFoundException: File does not exist: hdfs://localhost:8020/home/adio/workspace/projects/invoice-aggregation/target/invoice-aggregation-0.1.jar
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
at com.company.invoice.MapReduceStarter.main(MapReduceStarter.java:244)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
I would appreciate any suggestion, help, or even a guess as to why I am getting this error.
Upvotes: 4
Views: 1078
Reputation: 853
In my case, the error was fixed by copying mapred-site.xml into the HADOOP_CONF_DIR directory.
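For example (a minimal sketch; the source path assumes a typical Hortonworks layout under /etc/hadoop/conf, so adjust it to your installation):

cp /etc/hadoop/conf/mapred-site.xml $HADOOP_CONF_DIR/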
Upvotes: 0
Reputation: 1378
Include the JAR in the -libjars command-line option of the hadoop jar … command, or check for other alternatives here.
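For example, based on the command in the question (the HBase JAR paths below are assumptions for illustration; adjust them to your installation):

hadoop jar target/invoice-aggregation-0.1.jar -libjars /usr/lib/hbase/lib/hbase-client.jar,/usr/lib/hbase/lib/hbase-common.jar start="2014-02-01 01:00:00" end="2014-02-19 01:00:00" firstAccountId=0 lastAccountId=10

Note that -libjars is only honored when the driver parses its arguments with GenericOptionsParser, typically via ToolRunner. A minimal sketch of such a driver, assuming a class shaped like the MapReduceStarter from the stack trace (this is not the OP's actual code):

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MapReduceStarter extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // Build the job from getConf() so that options already applied by
        // ToolRunner (-libjars, -conf, -D...) take effect.
        Job job = Job.getInstance(getConf(), "invoice-aggregation");
        job.setJarByClass(MapReduceStarter.class);
        // ... mapper/reducer/input/output setup elided ...
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner strips the generic options before handing the
        // remaining arguments to run().
        System.exit(ToolRunner.run(new MapReduceStarter(), args));
    }
}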
Upvotes: 0
Reputation: 19011
OK, even though I am not sure this is the best solution, I solved my problem by adding my application JAR and all of the missing JARs to HDFS, using hadoop fs -copyFromLocal 'myjarslocation' 'where_hdfs_needs_the_jars'. So whenever MapReduce throws an exception telling you that some JAR is missing from a location on HDFS, add the JAR to that place. This is what I did to solve my problem. If anyone has a better approach, I would be pleased to hear it.
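For the path from the stack trace above, that would look something like this (a sketch only; the target directory simply mirrors the HDFS path the job submitter complained about):

hadoop fs -mkdir -p /home/adio/workspace/projects/invoice-aggregation/target
hadoop fs -copyFromLocal target/invoice-aggregation-0.1.jar hdfs://localhost:8020/home/adio/workspace/projects/invoice-aggregation/target/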
Upvotes: 0
Reputation: 2695
The error occurs because Hadoop cannot find the JARs in the expected location.
Place the JARs there and re-run the job. This should resolve the problem.
Upvotes: 1