Umes Bastola

Reputation: 516

Map-reduce via Oozie

If I use Oozie to run a MapReduce job, is there a specific rule for how many mappers will be started? Is it:

  1. one for Oozie and one for the MapReduce job, or
  2. one for Oozie plus one mapper for every 64 MB block (the default block size)?

Upvotes: 2

Views: 488

Answers (3)

Ram Ghadiyaram

Reputation: 29155

Short answer: Oozie launches the MapReduce job by submitting a single map-only job to the cluster, called the Oozie launcher. I agree with @Dennis Jaheruddin.

Detailed answer after my research: Oozie's execution model

Oozie’s execution model is different from the default approach users take to run Hadoop jobs. When a user invokes the Hadoop, Hive, or Pig CLI tool from a Hadoop edge node, the corresponding client executable runs on that node, which is configured to contact and submit jobs to the Hadoop cluster. When the same jobs are defined and submitted via an Oozie workflow action, things work differently.

Let’s say you are submitting a workflow job using the Oozie CLI on the edge node. The Oozie client actually submits the workflow to the Oozie server, which typically runs on a different node. Regardless of where it runs, it’s the Oozie server’s responsibility to submit and run the underlying MapReduce jobs on the Hadoop cluster. Oozie doesn’t do so by using the standard client tools installed locally on the Oozie server node. Instead, it first submits a MapReduce job called the “launcher job,” which in turn runs the Hadoop, Hive, or Pig job using the appropriate client APIs.

Important note: The Oozie launcher is basically a map-only job running a single mapper on the Hadoop cluster. This map job knows what to do for the specific action it is supposed to run and does the appropriate thing by using the libraries for Hadoop, Pig, etc. This results in other MapReduce jobs being spun up as required. These Oozie actions are called “asynchronous actions” in Oozie parlance. Oozie doesn’t run these actions in its own server, but kicks them off on the Hadoop cluster using a launcher job. The reason the Oozie server “outsources” the launcher to the Hadoop cluster is to protect itself from unexpected workloads and also to isolate user code from its own services. After all, Oozie has access to an awesome distributed system in the form of a Hadoop cluster.
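To make the submission path concrete, here is a minimal sketch using the Oozie Java client (org.apache.oozie.client.OozieClient). The server URL, HDFS application path, and property values are placeholders made up for illustration, not anything from the question:

    import java.util.Properties;

    import org.apache.oozie.client.OozieClient;
    import org.apache.oozie.client.OozieClientException;

    public class SubmitWorkflow {
        public static void main(String[] args) throws OozieClientException {
            // The client only talks to the Oozie server; the server (not this JVM)
            // then submits the single-mapper launcher job to the Hadoop cluster.
            OozieClient client = new OozieClient("http://oozie-host:11000/oozie");

            Properties conf = client.createConfiguration();
            conf.setProperty(OozieClient.APP_PATH,
                    "hdfs://namenode:8020/user/me/apps/mr-demo"); // dir containing workflow.xml
            conf.setProperty("queueName", "default"); // example workflow parameter (${queueName})

            // run() returns immediately with the workflow id; the launcher and the
            // real MapReduce job show up on the cluster afterwards.
            String jobId = client.run(conf);
            System.out.println("Submitted workflow: " + jobId);
        }
    }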


Coming to MapReduce actions: you can set the number of map tasks, but there is no guarantee it will be honored; the actual number depends on the input, as described below.

The number of maps is usually driven by the total size of the inputs, that is, the total number of blocks of the input files.

Number of Maps

The number of maps is usually driven by the number of DFS blocks in the input files, although that causes people to adjust their DFS block size in order to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps per node, although we have taken it up to 300 or so for very CPU-light map tasks. Task setup takes a while, so it is best if the maps take at least a minute to execute.
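To illustrate that the map count is only a hint, here is a minimal driver sketch using the new MapReduce API; the paths and numbers are placeholders, not values from the question:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class MapCountDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.setInt("mapreduce.job.maps", 10);   // hint only; the input splits decide the real count

            Job job = Job.getInstance(conf, "map-count-demo");
            job.setJarByClass(MapCountDemo.class);
            job.setNumReduceTasks(2);                // unlike maps, the reduce count is honored exactly

            FileInputFormat.addInputPath(job, new Path("/data/input"));    // placeholder path
            FileOutputFormat.setOutputPath(job, new Path("/data/output")); // placeholder path

            // With the default (identity) mapper and reducer this runs as-is; the number
            // of map tasks will equal the number of input splits of /data/input.
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }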

Upvotes: 1

Dennis Jaheruddin

Reputation: 21563

The above answers focus on how many maps and reduces a MapReduce job needs. However, as you specifically ask about Oozie, I will share my experience with MapReduce (in Pig) via Oozie.

Explanation

When you kick off an Oozie workflow, you need one YARN application for it. I am not sure what the exact logic is, but these applications appear to usually require one map, and occasionally two.

Besides the above, you still need the same number of mappers and reducers to do the actual work as you would without Oozie. (If you see a different number than you expected, it may be because you passed specific map or reduce properties when calling the script.)

Warning

The above means that if you have 100 available containers and kick off 100 workflows (for example, by starting a daily job with a start date 100 days in the past), it is likely that the workflows take up all available containers and the actual work is suspended indefinitely.
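One common way to avoid this is to route the launchers to their own YARN queue: Oozie applies any property prefixed with oozie.launcher. to the launcher job itself. Below is a rough sketch only; the queue names and the application path are assumptions about your cluster, not part of the behaviour described above:

    import java.util.Properties;

    import org.apache.oozie.client.OozieClient;

    public class SubmitWithLauncherQueue {
        public static void main(String[] args) throws Exception {
            OozieClient client = new OozieClient("http://oozie-host:11000/oozie");

            Properties conf = client.createConfiguration();
            conf.setProperty(OozieClient.APP_PATH,
                    "hdfs://namenode:8020/user/me/apps/daily-job"); // placeholder path

            // Launcher mappers (the 1-2 "extra" maps per workflow) go to a dedicated queue...
            conf.setProperty("oozie.launcher.mapred.job.queue.name", "oozie_launchers"); // assumed queue
            // ...while the actual map/reduce containers keep using the normal work queue.
            conf.setProperty("mapred.job.queue.name", "default");

            System.out.println("Submitted: " + client.run(conf));
        }
    }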

Upvotes: 3

siddhartha jain

Reputation: 1006

The number of mappers depends on the number of logical input splits; it does not depend on the number of blocks. You can control the number of input splits from your program.

Refer to https://hadoopi.wordpress.com/2013/05/27/understand-recordreader-inputsplit/ for more information about how input splits affect the number of mappers and how to create input splits.
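As a rough sketch of controlling splits from the driver (paths and sizes are made up for illustration), CombineTextInputFormat packs many small files or blocks into larger logical splits, so the mapper count follows the splits rather than the HDFS block count:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class SplitControlDemo {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "split-control-demo");
            job.setJarByClass(SplitControlDemo.class);

            // Combine small inputs: at most ~256 MB of data per logical split (and
            // hence per mapper), regardless of how many blocks or tiny files it spans.
            job.setInputFormatClass(CombineTextInputFormat.class);
            CombineTextInputFormat.setMaxInputSplitSize(job, 256 * 1024 * 1024L);

            FileInputFormat.addInputPath(job, new Path("/data/many-small-files")); // placeholder
            FileOutputFormat.setOutputPath(job, new Path("/data/out"));            // placeholder

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }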

Upvotes: -1
