Reputation: 11849
I created my term vectors as described here, like this:
~/Scripts/Mahout/trunk/bin/mahout seqdirectory --input /home/ben/Scripts/eipi/files --output /home/ben/Scripts/eipi/mahout_out -chunk 1
~/Scripts/Mahout/trunk/bin/mahout seq2sparse -i /home/ben/Scripts/eipi/mahout_out -o /home/ben/Scripts/eipi/termvecs -wt tf -seq
Then I run
~/Scripts/Mahout/trunk/bin/mahout lda -i /home/ben/Scripts/eipi/termvecs -o /home/ben/Scripts/eipi/lda_working -k 2 -v 100
and I get:
MAHOUT-JOB: /home/ben/Scripts/Mahout/trunk/examples/target/mahout-examples-0.6-SNAPSHOT-job.jar
11/09/04 16:28:59 INFO common.AbstractJob: Command line arguments: {--endPhase=2147483647, --input=/home/ben/Scripts/eipi/termvecs, --maxIter=-1, --numTopics=2, --numWords=100, --output=/home/ben/Scripts/eipi/lda_working, --startPhase=0, --tempDir=temp, --topicSmoothing=-1.0}
11/09/04 16:29:00 INFO lda.LDADriver: LDA Iteration 1
11/09/04 16:29:01 INFO input.FileInputFormat: Total input paths to process : 4
11/09/04 16:29:01 INFO mapred.JobClient: Cleaning up the staging area file:/tmp/hadoop-ben/mapred/staging/ben692167368/.staging/job_local_0001
Exception in thread "main" java.io.FileNotFoundException: File file:/home/ben/Scripts/eipi/termvecs/tokenized-documents/data does not exist.
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:371)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
    at org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:63)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:252)
    at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:902)
    at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:919)
    at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:838)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:791)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:791)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:465)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:494)
    at org.apache.mahout.clustering.lda.LDADriver.runIteration(LDADriver.java:426)
    at org.apache.mahout.clustering.lda.LDADriver.run(LDADriver.java:226)
    at org.apache.mahout.clustering.lda.LDADriver.run(LDADriver.java:174)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.mahout.clustering.lda.LDADriver.main(LDADriver.java:90)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
    at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:188)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
It's right, that file doesn't exist. How am I supposed to create it?
Upvotes: 0
Views: 718
Reputation: 58
The vectors might be empty because something went wrong during their creation. Check that the vectors were actually written to their output folders and that the files are not 0 bytes. This error can also occur if your input folder is missing some files; in that case, the first two steps will run without errors but will not produce valid output.
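One quick way to do that check is to look for zero-byte files under the vector output directory. A minimal sketch (the default path below is taken from the question; set VEC_DIR to your own output directory):

```shell
#!/bin/sh
# Directory holding the generated term vectors; override via the
# VEC_DIR environment variable (default path is from the question).
VEC_DIR="${VEC_DIR:-/home/ben/Scripts/eipi/termvecs}"

# Print every zero-byte file under the directory.
# No output means all generated files have content.
find "$VEC_DIR" -type f -size 0 -print
```

If this prints any paths, the corresponding step of the pipeline produced empty output and should be re-run after fixing its input.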
Upvotes: 0