John Chrysostom

Reputation: 3963

Failed to setup local dir in Hadoop on Windows

I was trying to run MapReduce jobs on Windows when I got an error like this:

Error: Application application_1441785420720_0002 failed 2 times due to AM Container for appattempt_1441785420720_0002_000002 exited with exitCode:-1000

Diagnostics:
Application application_1441785420720_0003 failed 2 times due to AM Container for appattempt_1441785420720_0003_000002 exited with exitCode: -1000 For more detailed output, check application tracking page:http://HOST:8088/cluster/app/application_1441785420720_0003 Then, click on links to logs of each attempt.

Diagnostics: Failed to setup local dir /tmp/hadoop-USER/nm-local-dir, which was marked as good. Failing this attempt. Failing the application.

Everything worked fine yesterday, and nothing about the Java environment, file permissions, or Hadoop configuration has changed.
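
For reference, the path in the diagnostics comes from the NodeManager local-dir setting (yarn.nodemanager.local-dirs, which defaults to ${hadoop.tmp.dir}/nm-local-dir; hadoop.tmp.dir in turn defaults to /tmp/hadoop-${user.name}). The rough sketch below (class name made up, Hadoop/YARN jars assumed to be on the classpath) prints the value my configuration actually resolves and checks whether it is writable:

    import java.io.File;

    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    // Made-up diagnostic class, not part of Hadoop: prints the resolved
    // NodeManager local dirs and checks whether each one is writable.
    public class CheckNmLocalDirs {
        public static void main(String[] args) {
            // Loads core-site.xml / yarn-site.xml from the classpath, with
            // ${...} variables (hadoop.tmp.dir, user.name, ...) expanded.
            YarnConfiguration conf = new YarnConfiguration();
            String dirs = conf.get(YarnConfiguration.NM_LOCAL_DIRS);
            System.out.println("yarn.nodemanager.local-dirs = " + dirs);

            for (String dir : dirs.split(",")) {
                File f = new File(dir.trim());
                boolean usable = (f.isDirectory() || f.mkdirs()) && f.canWrite();
                System.out.println(dir.trim() + " -> " + (usable ? "writable" : "NOT writable"));
            }
        }
    }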

Upvotes: 2

Views: 5122

Answers (5)

John Chrysostom

Reputation: 3963

This is a bug in how Hadoop 2.7 handles file permissions on Windows when your machine belongs to an office domain but is not currently connected to it (e.g., because you're working remotely).

The long-term fix is to upgrade to Hadoop 2.8+.

The short-term fix is to VPN into your office when working remotely, so that you are connected to your office domain and Hadoop can resolve your permissions correctly.
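
If you want to see what the permission lookup is working with, something like the sketch below (illustrative only; Hadoop jars assumed to be on the classpath) prints the user name and groups Hadoop resolves for the current account. Run it on and off the VPN and compare the output; when the domain can't be reached, this lookup is a likely place for things to go wrong.

    import java.io.IOException;
    import java.util.Arrays;

    import org.apache.hadoop.security.UserGroupInformation;

    // Illustrative sketch: prints the user and groups Hadoop resolves for the
    // current account, which is what its permission checks are based on.
    public class WhoDoesHadoopThinkIAm {
        public static void main(String[] args) throws IOException {
            UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
            System.out.println("user   = " + ugi.getUserName());
            System.out.println("groups = " + Arrays.toString(ugi.getGroupNames()));
        }
    }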

Upvotes: 4

Megh Patel

Reputation: 31

One solution is to open the command prompt as administrator and then run the scripts such as start-all.cmd. I think the administrator privileges will solve the problem.

Upvotes: 2

Jordan Huffaker

Reputation: 46

There seems to be a bug in Hadoop 2.7 related to file permissions on Windows. You can either fix the bug by editing the code directly or save yourself the headache by upgrading to Hadoop 2.8.

Upvotes: 1

Khanh Duy Pham

Reputation: 79

You should run CMD with "Run as Administrator" to fix this, since the Hadoop cluster environment is running on the Windows operating system.

Upvotes: 7

Manoj Kumar G

Reputation: 502

This is a permissions issue. Some time back I got this error too, while trying to submit a MapReduce job. The OS was CentOS rather than Windows, but the cause of the error is the same: the /tmp directory was created by the "hdfs" user and belongs to "supergroup". If a user who does not belong to supergroup tries to submit a job, they are sure to get this error.

Name    User    Group
tmp     hdfs    supergroup

This is why the job executes when you log in to your office domain and then submit it.
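
If you want to confirm the ownership, hdfs dfs -ls / will show it; alternatively, a rough sketch along these lines (Hadoop client jars and cluster configuration assumed to be on the classpath) reads the owner, group, and permission of /tmp through the FileSystem API:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Illustrative sketch: reads the owner/group/permission of /tmp from the
    // default filesystem configured in core-site.xml.
    public class CheckTmpOwnership {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            FileStatus status = fs.getFileStatus(new Path("/tmp"));
            System.out.println("owner      = " + status.getOwner());
            System.out.println("group      = " + status.getGroup());
            System.out.println("permission = " + status.getPermission());
        }
    }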

Upvotes: 0
