c-val

Reputation: 191

Flink on YARN : Amazon S3 wrongly used instead of HDFS

I followed Flink on YARN's setup documentation. But when I run ./bin/yarn-session.sh -n 2 -jm 1024 -tm 2048, while authenticated to Kerberos, I get the following error:

2016-06-16 17:46:47,760 WARN  org.apache.hadoop.util.NativeCodeLoader                       - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-06-16 17:46:48,518 INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl     - Timeline service address: https://**host**:8190/ws/v1/timeline/
2016-06-16 17:46:48,814 INFO  org.apache.flink.yarn.FlinkYarnClient                         - Using values:
2016-06-16 17:46:48,815 INFO  org.apache.flink.yarn.FlinkYarnClient                         -   TaskManager count = 2
2016-06-16 17:46:48,815 INFO  org.apache.flink.yarn.FlinkYarnClient                         -   JobManager memory = 1024
2016-06-16 17:46:48,815 INFO  org.apache.flink.yarn.FlinkYarnClient                         -   TaskManager memory = 2048
Exception in thread "main" java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.s3a.S3AFileSystem could not be instantiated
    at java.util.ServiceLoader.fail(ServiceLoader.java:224)
    at java.util.ServiceLoader.access$100(ServiceLoader.java:181)
    at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:377)
    at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
    at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2623)
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2634)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
    at org.apache.flink.yarn.FlinkYarnClientBase.deployInternal(FlinkYarnClientBase.java:531)
    at org.apache.flink.yarn.FlinkYarnClientBase$1.run(FlinkYarnClientBase.java:342)
    at org.apache.flink.yarn.FlinkYarnClientBase$1.run(FlinkYarnClientBase.java:339)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.flink.yarn.FlinkYarnClientBase.deploy(FlinkYarnClientBase.java:339)
    at org.apache.flink.client.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:419)
    at org.apache.flink.client.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:362)
Caused by: java.lang.NoClassDefFoundError: com/amazonaws/AmazonServiceException
    at java.lang.Class.getDeclaredConstructors0(Native Method)
    at java.lang.Class.privateGetDeclaredConstructors(Class.java:2532)
    at java.lang.Class.getConstructor0(Class.java:2842)
    at java.lang.Class.newInstance(Class.java:345)
    at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:373)
    ... 18 more
Caused by: java.lang.ClassNotFoundException: com.amazonaws.AmazonServiceException
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    ... 23 more

I set the following properties in my ./flink-1.0.3/conf/flink-conf.yaml:

fs.hdfs.hadoopconf: /etc/hadoop/conf/
fs.hdfs.hdfssite: /etc/hadoop/conf/hdfs-site.xml

How can I use HDFS instead of Amazon's S3?

Thanks.

Upvotes: 1

Views: 733

Answers (2)

c-val

Reputation: 191

I actually had to set the env var HADOOP_CLASSPATH as suggested in a deleted answer.

@rmetzger: fs.defaultFS is set.

The resulting command:

HADOOP_CLASSPATH=... ./bin/yarn-session.sh -n 2 -jm 1024 -tm 2048
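Spelled out, the workaround looks like the sketch below. The `hadoop classpath` helper is the usual way to produce the value on a real cluster; the literal path used here is only a hypothetical illustration, since my actual classpath is cluster-specific:

```shell
#!/bin/sh
# On a real cluster, derive the classpath from the hadoop CLI
# (assumption: `hadoop` is on the PATH):
#   export HADOOP_CLASSPATH=$(hadoop classpath)
# For illustration only, a hypothetical Hadoop client layout:
export HADOOP_CLASSPATH="/etc/hadoop/conf:/usr/lib/hadoop/lib/*"
echo "$HADOOP_CLASSPATH"
# With the variable exported, the session starts as before:
#   ./bin/yarn-session.sh -n 2 -jm 1024 -tm 2048
```

With the Hadoop jars on the classpath, the S3AFileSystem provider either resolves or is no longer the one selected, so the ServiceLoader failure goes away.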

Upvotes: 1

Robert Metzger

Reputation: 4542

I guess the problem is that Flink is not picking up your configuration file.

Can you remove the line starting with fs.hdfs.hdfssite from the configuration? It's not needed if fs.hdfs.hadoopconf is set.
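That is, the relevant part of flink-conf.yaml would reduce to a single line (using the path from your question):

```yaml
fs.hdfs.hadoopconf: /etc/hadoop/conf/
```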

Also, can you check whether fs.defaultFS in core-site.xml is set to something starting with hdfs://?
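For example, a core-site.xml pointing at HDFS would contain an entry like this (host and port are placeholders, not values from your cluster):

```xml
<!-- /etc/hadoop/conf/core-site.xml (hypothetical values) -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode.example.com:8020</value>
</property>
```

If that value points at an s3a:// URI instead, Hadoop's FileSystem.get() will try to instantiate S3AFileSystem, which matches the stack trace you posted.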

Upvotes: 1
