Netap

Reputation: 193

Hadoop passing variables from reducer to main

I am working on a MapReduce program. I'm trying to pass parameters to the context configuration in the reduce method using the setLong method, and then read them in main after the job completes.

In the reducer:

context.getConfiguration().setLong(key, someLong);

In main, after the job completes, I try to read the value using:

long val = job.getConfiguration().getLong(key, -1);

But I always get -1.

When I read the value inside the reducer, I see that it is set and I get the correct answer.

Am I missing something?

Thank you

Upvotes: 0

Views: 639

Answers (2)

Radim

Reputation: 4808

You can use counters: set and update their values in the reducers, and then read them in your client application (main).
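A minimal sketch of that approach; the class, enum, and counter names (SumJob, MyCounters, SUM_VALUE) are made up for illustration:

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;

public class SumJob {

    // Custom counters are usually declared as an enum shared by the whole job.
    public enum MyCounters { SUM_VALUE }

    public static class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text key, Iterable<LongWritable> values, Context context) {
            long someLong = 0;
            for (LongWritable v : values) {
                someLong += v.get();
            }
            // Accumulate the value in a counter instead of the Configuration.
            context.getCounter(MyCounters.SUM_VALUE).increment(someLong);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();
        // ... job setup (mapper, reducer, input/output paths) omitted ...
        job.waitForCompletion(true);

        // Counters are aggregated by the framework and visible to the client.
        long val = job.getCounters().findCounter(MyCounters.SUM_VALUE).getValue();
        System.out.println("total from reducers: " + val);
    }
}

Since counters are aggregated across all tasks by the framework, this also works with more than one reducer.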

Upvotes: 3

yanghaogn

Reputation: 863

You can pass configuration from main to the map and reduce tasks, but you cannot pass it back. Configuration is propagated as follows:

  • A configuration file is generated on the MapReduce client from the configuration you set in main, and it is pushed to an HDFS path shared only by that job. The file is read-only.
  • When a map or reduce task launches, the configuration file is pulled from that HDFS path, and the task initializes its configuration from the file.

If you want to pass a value back, you can use a separate HDFS file: write it in the reducer and read it after the job completes.
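A rough sketch of that idea, assuming a single reducer and a hypothetical path /tmp/myjob/result; with several reducers each task would need its own file (for example named after its task ID):

import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// In the reducer, write the value once all input has been processed.
private long someLong;   // accumulated in reduce()

@Override
protected void cleanup(Context context) throws IOException, InterruptedException {
    FileSystem fs = FileSystem.get(context.getConfiguration());
    Path resultPath = new Path("/tmp/myjob/result");   // hypothetical path
    try (FSDataOutputStream out = fs.create(resultPath, true)) {
        out.writeLong(someLong);
    }
}

// In main, after job.waitForCompletion(true):
FileSystem fs = FileSystem.get(job.getConfiguration());
try (FSDataInputStream in = fs.open(new Path("/tmp/myjob/result"))) {
    long val = in.readLong();
    System.out.println("value from reducer: " + val);
}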

Upvotes: 2
