To be more specific:
The 'CPU_MILLISECONDS' counter reports the total time, in milliseconds, that all of the job's tasks spent on the CPU.
The higher the 'REDUCE_SHUFFLE_BYTES' value, the more network bandwidth the shuffle phase consumed. Many other built-in counters like these are available.
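To look up one of these built-in counters for a finished job from the command line, Hadoop ships the `mapred job -counter <job-id> <group-name> <counter-name>` subcommand. A minimal sketch that just builds that command; the job ID below is a hypothetical placeholder, and the group key is the full `TaskCounter` class name, which is how Hadoop names the group holding CPU_MILLISECONDS and REDUCE_SHUFFLE_BYTES:

```python
def counter_cmd(job_id, group, counter):
    """Return the argv list for querying one counter of one completed job."""
    return ["mapred", "job", "-counter", job_id, group, counter]

# Group that holds the built-in task counters (CPU_MILLISECONDS,
# REDUCE_SHUFFLE_BYTES, etc.) -- the full class name is the group key.
TASK_COUNTER_GROUP = "org.apache.hadoop.mapreduce.TaskCounter"

# Hypothetical job ID for illustration; substitute your own.
cmd = counter_cmd("job_1700000000000_0001", TASK_COUNTER_GROUP, "CPU_MILLISECONDS")
print(" ".join(cmd))
```

Running the printed command on a cluster prints the counter's value for that job; pass `REDUCE_SHUFFLE_BYTES` instead to check shuffle traffic.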
There are 4 categories of counters in Hadoop: file system, job, framework, and custom.
You can use the built-in counters to validate that:
1. The correct number of bytes was read and written
2. The correct number of tasks was launched and ran successfully
3. The amount of CPU and memory consumed is appropriate for your job and cluster nodes
4. The correct number of records was read and written
More info is available at https://www.mapr.com/blog/managing-monitoring-and-testing-mapreduce-jobs-how-work-counters#.VZy9IF_vPZ4 (credits: mapr.com)
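Custom counters (the fourth category above) can also be incremented from a Hadoop Streaming task by writing a specially formatted line to stderr, using the documented `reporter:counter:<group>,<counter>,<amount>` protocol. A minimal sketch; the group name "MyApp" and the "MalformedRecords" counter are hypothetical examples:

```python
import sys

def bump_counter(group, counter, amount=1):
    """Increment a Hadoop counter from inside a streaming task by
    writing the reporter protocol line to stderr."""
    sys.stderr.write("reporter:counter:%s,%s,%d\n" % (group, counter, amount))

def map_line(line):
    """Toy map step: turn 'key,value' into 'key<TAB>value', and count
    malformed rows in a custom counter instead of emitting them."""
    parts = line.rstrip("\n").split(",")
    if len(parts) != 2:
        bump_counter("MyApp", "MalformedRecords")  # shows up in the job's counters
        return None
    return "%s\t%s" % (parts[0], parts[1])

# In a real streaming job this loop would read sys.stdin line by line.
for out in filter(None, map(map_line, ["a,1", "broken-row", "b,2"])):
    print(out)
```

After the job finishes, the "MyApp / MalformedRecords" total appears alongside the built-in counters in the job's counter listing, which makes it a cheap way to validate record-level data quality.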