Thomas Hubregtsen

Reputation: 439

Hadoop log files missing

Coming from an older version of Hadoop, I am looking for the user log files (log.index, stderr, stdout, syslog) in Hadoop 2.2.0. I first looked for the web interface, but there is nothing running at port 50030. I then looked in HADOOP_HOME_DIR/logs, but I did not see a userlogs dir, nor anything with a job number. The next location I looked was the temp dir (/tmp), where I found folders that looked promising:

$ find . -name "job_local1643076800_0001"
./hadoop-tom/mapred/staging/tom1643076800/.staging/job_local1643076800_0001
./hadoop-tom/mapred/local/localRunner/tom/jobcache/job_local1643076800_0001
./hadoop-tom/mapred/local/localRunner/tom/job_local1643076800_0001

In there I found directories in the format I expected, such as attempt_local1643076800_0001_m_000000_0, but they were empty.

I also set "export HADOOP_LOG_DIR=/path", but that directory does not fill up either. Am I missing something here? Or did something go wrong when I built Hadoop from source?
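For reference, this is roughly how I set it (a minimal sketch; the /path/to/hadoop-logs directory is just an example from my setup, and the same export can go into etc/hadoop/hadoop-env.sh):

# Set the log directory before (re)starting any Hadoop daemons
$ export HADOOP_LOG_DIR=/path/to/hadoop-logs
$ mkdir -p "$HADOOP_LOG_DIR"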

Thanks in advance!

Upvotes: 1

Views: 1465

Answers (2)

Lauri Peltonen

Reputation: 1542

The user logs should be in a userlogs folder inside the logs folder, so something like hadoop-2.2.0/logs/userlogs/. Run the jps command to see whether the needed processes are running - probably they are not. If not, check your daemon logs (not the userlogs) for errors and go through your configurations.
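A quick way to check (a minimal sketch; the daemon names assume a pseudo-distributed setup running on YARN):

$ jps
# Expect to see NameNode, DataNode, ResourceManager and NodeManager
# (plus JobHistoryServer if you started it). If they are missing,
# look at the daemon startup logs:
$ ls $HADOOP_HOME/logs/*.log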

If your processes are not running, I wonder how you managed to run some jobs. Did your jobs finish nicely?

Upvotes: 1

SachinJose

Reputation: 8522

The default MapReduce framework in Hadoop 2.2.0 is YARN. Port 50030 is the JobTracker web UI; YARN uses the ResourceManager instead of the JobTracker, and its web UI is accessible at port 8088.
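For example, you can quickly check whether the ResourceManager is up (assuming it runs on the local machine with the default port):

$ curl -s http://localhost:8088/cluster | head
# Or open http://localhost:8088 in a browser; if nothing responds,
# the ResourceManager is not running.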

In your case the job is being executed with the LocalJobRunner, which means that either YARN is not configured properly on the client side or the YARN services are down. Validate your configurations.
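As a minimal sketch (file locations assume the standard etc/hadoop configuration directory), make sure mapred-site.xml tells the client to submit jobs to YARN, then restart the YARN daemons:

# Check that MapReduce is set to run on YARN, not the LocalJobRunner
$ grep -A1 mapreduce.framework.name $HADOOP_HOME/etc/hadoop/mapred-site.xml
# The <value> should be "yarn"; if the property is missing or set to
# "local", jobs run as job_local* with the LocalJobRunner.

# Restart the YARN services and verify with jps
$ $HADOOP_HOME/sbin/stop-yarn.sh
$ $HADOOP_HOME/sbin/start-yarn.sh
$ jps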

Upvotes: 0
