L. Norman

Reputation: 483

Error Running Yarn Jar MRAppMaster NoSuchMethodError

I am running out of ideas. I have tried numerous configurations and nothing is working. I am trying to run a jar file via YARN on my Hadoop cluster, only to get:

2020-10-07 21:27:01,960 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1602101475531_0003_000002
2020-10-07 21:27:02,145 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: 
/************************************************************
[system properties]
###
************************************************************/
2020-10-07 21:27:02,149 ERROR [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.lang.NoSuchMethodError: com/google/common/base/Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V (loaded from file:/data/hadoop/yarn/usercache/hdfs-user/appcache/application_1602101475531_0003/filecache/11/job.jar/job.jar by sun.misc.Launcher$AppClassLoader@8da96717) called from class org.apache.hadoop.conf.Configuration (loaded from file:/data/hadoop-3.3.0/share/hadoop/common/hadoop-common-3.3.0.jar by sun.misc.Launcher$AppClassLoader@8da96717).
    at org.apache.hadoop.conf.Configuration.set(Configuration.java:1380)
    at org.apache.hadoop.conf.Configuration.set(Configuration.java:1361)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1690)
2020-10-07 21:27:02,152 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.lang.NoSuchMethodError: com/google/common/base/Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V (loaded from file:/data/hadoop/yarn/usercache/hdfs-user/appcache/application_1602101475531_0003/filecache/11/job.jar/job.jar by sun.misc.Launcher$AppClassLoader@8da96717) called from class org.apache.hadoop.conf.Configuration (loaded from file:/data/hadoop-3.3.0/share/hadoop/common/hadoop-common-3.3.0.jar by sun.misc.Launcher$AppClassLoader@8da96717).

My mapred-site.xml:

<configuration>
    <property>
        <name>mapreduce.cluster.temp.dir</name>
        <value>/tmp/hadoop-mapred</value>
        <final>true</final>
    </property>

    <property>
        <name>mapred.job.tracker</name>
        <value>###</value>
    </property>

    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <description>The runtime framework for executing MapReduce jobs.
            Can be one of local, classic or yarn.
        </description>
    </property>

    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>3072</value>
    </property>

    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>2048</value>
    </property>

    <property>
        <name>mapreduce.shuffle.port</name>
        <value>5010</value>
    </property>

    <property>
        <name>mapreduce.task.io.sort.mb</name>
        <value>256</value>
    </property>

    <property>
        <name>mapreduce.task.io.sort.factor</name>
        <value>64</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=/data/hadoop-3.3.0</value>
    </property>
    <property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=/data/hadoop-3.3.0</value>
    </property>
    <property>
       <name>mapreduce.reduce.env</name>
       <value>HADOOP_MAPRED_HOME=/data/hadoop-3.3.0</value>
    </property>
    <property>
        <name>mapreduce.application.classpath</name>
        <value>/data/hadoop-3.3.0/etc/hadoop:/data/hadoop-3.3.0/share/hadoop/common/lib/*:/data/hadoop-3.3.0/share/hadoop/common/*:/data/hadoop-3.3.0/share/hadoop/hdfs:/data/hadoop-3.3.0/share/hadoop/hdfs/lib/*:/data/hadoop-3.3.0/share/hadoop/hdfs/*:/data/hadoop-3.3.0/share/hadoop/mapreduce/*:/data/hadoop-3.3.0/share/hadoop/yarn:/data/hadoop-3.3.0/share/hadoop/yarn/lib/*:/data/hadoop-3.3.0/share/hadoop/yarn/*</value>
     </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>###</value> <!-- hostname of machine  where jobhistory service is started -->
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>###</value>
    </property>


</configuration>

and yarn-site.xml:

<configuration>

<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    </property>

    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>###</value>
        <description>Enter your ResourceManager hostname.</description>
    </property>

    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>###</value>
        <description>Enter your ResourceManager hostname.</description>
    </property>

    <property>
        <name>yarn.resourcemanager.address</name>
        <value>###</value>
        <description>Enter your ResourceManager hostname.</description>
    </property>

    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>###</value>
        <description>Enter your ResourceManager hostname.</description>
    </property>

    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/data/hadoop/yarn</value>
        <description>Comma-separated list of paths. Use the list of directories from $YARN_LOCAL_DIR. For example, /grid/hadoop/yarn/local,/grid1/hadoop/yarn/local.</description>
    </property>

    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>/data/hadoop/yarn-logs</value>
        <description>Use the list of directories from $YARN_LOCAL_LOG_DIR. For example, /grid/hadoop/yarn/log,/grid1/hadoop/yarn/log,/grid2/hadoop/yarn/log</description>
    </property>

    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>###</value>
        <description>URL for job history server</description>
    </property>

    <property>
        <name>yarn.timeline-service.webapp.address</name>
        <value>###</value>
    </property>

    <property>
        <name>yarn.application.classpath</name>
        <value>/data/hadoop-3.3.0/share/hadoop/mapreduce/*,/data/hadoop-3.3.0/share/hadoop/mapreduce/lib/*,/data/hadoop-3.3.0/share/hadoop/common/*,/data/hadoop-3.3.0/share/hadoop/common/lib/*,/data/hadoop-3.3.0/share/hadoop/hdfs/*,/data/hadoop-3.3.0/share/hadoop/hdfs/lib/*,/data/hadoop-3.3.0/share/hadoop/yarn/*,/data/hadoop-3.3.0/share/hadoop/yarn/lib/*</value>
    </property>

</configuration>

It always fails at the final stage, after my MapReduce program has run almost in its entirety. Any ideas would be greatly appreciated. I am running Apache Hadoop 3.3.0.

Upvotes: 0

Views: 683

Answers (1)

1218985

Reputation: 8012

Looks like your Google Guava version is either too old (< 20.0) or mismatched (multiple jar versions). Make sure several versions don't get loaded into HADOOP_CLASSPATH.

Look for the Guava versions by issuing:

find /usr/local/Cellar/hadoop -name guava*.jar -type f
/usr/local/Cellar/hadoop/3.3.0/libexec/share/hadoop/yarn/csi/lib/guava-20.0.jar
/usr/local/Cellar/hadoop/3.3.0/libexec/share/hadoop/common/lib/guava-27.0-jre.jar
/usr/local/Cellar/hadoop/3.3.0/libexec/share/hadoop/hdfs/lib/guava-27.0-jre.jar

If you're using Maven, use:

mvn dependency:tree | less
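
If the tree shows an old Guava being pulled in transitively and packaged into your job jar (the stack trace shows Preconditions being loaded from job.jar rather than from Hadoop's own lib directory), one way to resolve it is to exclude Guava from the offending dependency and keep the Hadoop artifacts at provided scope, so the cluster's Guava 27 is used at runtime. A minimal pom.xml sketch, where example.group:some-library is a hypothetical placeholder for whatever dependency drags in the old Guava:

<!-- Hypothetical dependency that transitively pulls in an old Guava:
     exclude Guava so it is not bundled into the job jar. -->
<dependency>
    <groupId>example.group</groupId>
    <artifactId>some-library</artifactId>
    <version>1.0.0</version>
    <exclusions>
        <exclusion>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
        </exclusion>
    </exclusions>
</dependency>

<!-- Hadoop classes (and their Guava) are already on the cluster classpath,
     so "provided" scope keeps them out of a fat/shaded job jar. -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>3.3.0</version>
    <scope>provided</scope>
</dependency>

After rebuilding the job jar, rerun mvn dependency:tree and confirm only one Guava version remains.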

Upvotes: 1
