Reputation: 17648
The basic MR2 examples (such as the pi example) were failing on a recently built pseudo-distributed MR2/HDFS cluster, with the following error:
13/07/06 21:20:47 ERROR security.UserGroupInformation: PriviledgedActionException as:root (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=EXECUTE, inode="/tmp/hadoop-yarn/staging":mapred:mapred:drwxrwx---
Why could this be happening?
Upvotes: 4
Views: 8429
Reputation: 143
First, you need to create the temporary directory with the correct permissions. As the hadoop user, run the following commands:
$ hdfs dfs -mkdir /tmp
$ hdfs dfs -chmod -R 1777 /tmp
You might want to remove the current content of the /tmp directory.
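A minimal sketch of that cleanup, run as the same superuser as above (the glob is quoted so HDFS expands it rather than the local shell):
# Remove everything currently under /tmp on HDFS, bypassing the trash
$ hdfs dfs -rm -r -skipTrash '/tmp/*'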
For Hive users: if you have a similar problem with the scratch directory, edit the file hive/conf/hive-site.xml:
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>${system:java.io.tmpdir}/${system:user.name}</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>${system:java.io.tmpdir}/${hive.session.id}_resources</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
  <name>hive.scratch.dir.permission</name>
  <value>777</value>
  <description>The permission for the user specific scratch directories that get created.</description>
</property>
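These properties only cover Hive's local scratch space. If the permission error points at Hive's scratch directory on HDFS instead, a similarly hedged fix is to open up hive.exec.scratchdir, which on recent Hive versions typically defaults to /tmp/hive (run as the HDFS superuser):
# Sticky-bit permissions on the HDFS-side Hive scratch directory, mirroring /tmp
$ hdfs dfs -chmod -R 1777 /tmp/hive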
Upvotes: 1
Reputation: 11
I was getting this error in HDP when running an example wordcount jar, caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=EXECUTE, inode="/user/root/.staging":hdfs:hdfs:drwx------
As the hdfs user, I ran chmod 777 on the /user directory; after that I could run the .jar file with my Ubuntu user (a sudoer), and I could also run it as the hdfs user.
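In command form, that is roughly the following, run as the hdfs user (note that -R is needed so the .staging subdirectory from the error becomes reachable, and that 777 on /user is very permissive; chowning just /user/root/.staging is a narrower alternative):
# Recursively open up /user so the per-user staging directories are accessible
$ hdfs dfs -chmod -R 777 /user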
Upvotes: -1
Reputation: 872
Add yarn.app.mapreduce.am.staging-dir to your mapred-site.xml, like this:
<property>
  <name>yarn.app.mapreduce.am.staging-dir</name>
  <value>/user</value>
</property>
This configuration presumes that the user account, in your case root, has its home directory /user/root on HDFS, so the staging directory will be created as /user/root/.staging, where that user already has the right permissions.
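If that home directory does not exist yet, a minimal sketch for creating it, assuming the HDFS superuser account is called hdfs:
# Create root's HDFS home directory and hand ownership to root,
# so /user/root/.staging can be created with the right permissions
$ sudo -u hdfs hdfs dfs -mkdir -p /user/root
$ sudo -u hdfs hdfs dfs -chown root:root /user/root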
For more information, check out "Step 4: Configure the Staging Directory" at the following link.
Upvotes: 6
Reputation: 17648
The solution was simply to change the /tmp/hadoop-yarn permissions:
sudo -u hdfs hadoop fs -chmod -R 777 /tmp/hadoop-yarn
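To confirm the change took effect (same hdfs superuser assumption):
# List /tmp to check the new mode bits on /tmp/hadoop-yarn
sudo -u hdfs hadoop fs -ls /tmp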
It is left to the imagination how this directory could end up with incorrect permissions, given that it was created entirely by Hadoop's internal lifecycle.
(Comments would be appreciated)
Upvotes: 5