USB

Reputation: 6139

MR implementation not working in Hadoop cluster

My decision tree implementation works perfectly in Eclipse Juno.

But when I try to run it on my cluster, it shows an error.

The folder "n" is created on my local disk at /user/sree.

When I run hadoop fs -ls /user/sree/n,

nothing is listed in "n" and no "intermediate" files have been created in /user/sree/n. Why is that? It works perfectly in Eclipse.

Any suggestions?

** UPDATE **

I updated my code as follows:

1. Instead of

BufferedWriter bw = new BufferedWriter(new FileWriter(new File("n/intermediate"+id.current_index+".txt"), true));

in Reduce.java, I changed it to (see the sketch after this update for how fs is obtained)

BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(fs.create(new Path("n/intermediate"+id.current_index+".txt"), true)));

2. Instead of

fstream = new FileInputStream("n/intermediate"+id.current_index+".txt");
DataInputStream in = new DataInputStream(fstream);
BufferedReader br = new BufferedReader(new InputStreamReader(in));

in GainRatio.java, I changed it to

BufferedReader br = new BufferedReader(new InputStreamReader(fs.open(new Path("n/intermediate"+id.current_index+".txt"))));

It executes, but it does not run to completion.

I am not able to get the final output. Am I doing anything wrong?
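For reference, here is a minimal sketch of how the fs handle used in the snippets above might be obtained and used inside the reducer. This is an assumption about the surrounding code, not the actual Reduce.java: the class layout, key/value types, and the fixed file name (in place of id.current_index) are illustrative only.

import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class Reduce extends Reducer<Text, Text, Text, Text> {

    private FileSystem fs;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // Resolve the FileSystem from the job configuration so that relative paths
        // like "n/intermediate0.txt" land under the HDFS home directory
        // (e.g. /user/sree) instead of on the local disk of the task node.
        fs = FileSystem.get(context.getConfiguration());
    }

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        // Write one intermediate file to HDFS; the boolean argument overwrites
        // an existing file of the same name.
        Path intermediate = new Path("n/intermediate0.txt");
        try (BufferedWriter bw = new BufferedWriter(
                new OutputStreamWriter(fs.create(intermediate, true)))) {
            for (Text value : values) {
                bw.write(key.toString() + "\t" + value.toString());
                bw.newLine();
            }
        }
        // The same handle can read the file back later, as in the GainRatio.java change:
        // BufferedReader br = new BufferedReader(new InputStreamReader(fs.open(intermediate)));
        context.write(key, new Text("written"));
    }
}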

Upvotes: 0

Views: 250

Answers (1)

Thomas Jungblut

Reputation: 20969

Because

 BufferedWriter bw = new BufferedWriter(new FileWriter(new File("C45/intermediate"+id.current_index+".txt"), true));
 bw.write(text);

writes to the local disk and not to HDFS. So you have to look for the file in your local filesystem.
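As a rough illustration of the difference (the class name and path here are made up, and it assumes the Hadoop configuration on the classpath points at the cluster): a relative path given to File/FileWriter resolves against the local working directory of the JVM running the task, while the same relative path given to a FileSystem obtained from the cluster configuration resolves against the HDFS home directory, e.g. /user/sree.

import java.io.File;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WhereDidItGo {
    public static void main(String[] args) throws IOException {
        // File/FileWriter resolve "C45/..." against the local working directory.
        File local = new File("C45/intermediate0.txt");
        System.out.println("exists on local disk: " + local.exists());

        // FileSystem.get(conf) resolves the same relative path against the
        // user's HDFS home directory (e.g. /user/sree/C45/...).
        FileSystem fs = FileSystem.get(new Configuration());
        Path onHdfs = new Path("C45/intermediate0.txt");
        System.out.println("exists on HDFS: " + fs.exists(onHdfs));
    }
}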

Upvotes: 2
