sag

Reputation: 5451

Connecting to S3 from Zeppelin using the Spark interpreter

I am trying to do some basic analytics using Spark and Zeppelin.

I've set up the Spark cluster using the steps in spark-ec2. I've also set up Zeppelin on EC2 following the steps in this blog.

I've added the libraries I want to use with the code below in a Zeppelin notebook:

%dep
z.reset()

// Add spark-csv package
z.load("com.databricks:spark-csv_2.10:1.2.0")

// Add jars required for s3 access
z.load("org.apache.hadoop:hadoop-aws:2.6.0")

And the code below reads CSV files from S3:

sc.hadoopConfiguration.set("fs.s3n.impl","org.apache.hadoop.fs.s3native.NativeS3FileSystem")
sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId","XXX")
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey","XXX")

val path = "s3n://XXX/XXX.csv"
val df = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").load(path)

I get the following exception:

java.lang.VerifyError: Bad type on operand stack Exception Details: 
Location: org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.initialize(Ljava/net/URI;Lorg/apache/hadoop/conf/Configuration;)V @38: invokespecial 
Reason: Type 'org/jets3t/service/security/AWSCredentials' (current frame, stack[3]) is not assignable to 'org/jets3t/service/security/ProviderCredentials' 

Current Frame: bci: @38 flags: { } 
locals: { 'org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore', 'java/net/URI', 'org/apache/hadoop/conf/Configuration', 'org/apache/hadoop/fs/s3/S3Credentials', 'org/jets3t/service/security/AWSCredentials' } 

stack: { 'org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore', uninitialized 32, uninitialized 32, 'org/jets3t/service/security/AWSCredentials' } 
Bytecode: 
0000000: bb00 0259 b700 034e 2d2b 2cb6 0004 bb00
0000010: 0559 2db6 0006 2db6 0007 b700 083a 042a
0000020: bb00 0959 1904 b700 0ab5 000b a700 0b3a
0000030: 042a 1904 b700 0d2a 2c12 0e03 b600 0fb5
0000040: 0010 2a2c 1211 1400 12b6 0014 1400 15b8
0000050: 0017 b500 182a 2c12 1914 0015 b600 1414
0000060: 0015 b800 17b5 001a 2a2c 121b b600 1cb5
0000070: 001d 2abb 001e 592b b600 1fb7 0020 b500
0000080: 21b1
Exception Handler Table: bci [14, 44] => handler: 47 
Stackmap Table: full_frame(@47,{Object[#191],Object[#192],Object[#193],Object[#194]},{Object[#195]}) same_frame(@55) 
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.createDefaultStore(NativeS3FileSystem.java:334) 
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.initialize(NativeS3FileSystem.java:324) 
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596) 
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)

I've looked into How to use Zeppelin to access aws spark-ec2 cluster and s3 buckets. As mentioned in that answer, I've changed the security settings and am able to connect to Spark; sc.version prints 1.4.0.

I've also looked into Why Zeppelin notebook is not able to connect to S3. The answer there suggests using local Spark, which I don't want to do; I want to use the Spark cluster on my EC2 instance.

What step am I missing here?

Upvotes: 3

Views: 8852

Answers (1)

bzz

Reputation: 663

The error happens due to a Hadoop version mismatch between the version Zeppelin was compiled against and the one available on your cluster at runtime.

You should check that Zeppelin has been built with the flags indicating the proper Hadoop version your cluster has.
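For example, since your cluster runs Spark 1.4 on Hadoop 2.6, a rebuild from the Zeppelin source directory might look like this (a sketch; the exact Maven profile names depend on your Zeppelin release):

```shell
# Rebuild Zeppelin against the cluster's Hadoop and Spark versions.
# Profile names (-Pspark-1.4, -Phadoop-2.6) are assumptions based on
# typical Zeppelin builds of that era; check your release's docs.
mvn clean package -DskipTests \
  -Pspark-1.4 \
  -Phadoop-2.6 -Dhadoop.version=2.6.0
```

This way the hadoop-aws/jets3t classes Zeppelin bundles match what the NativeS3FileSystem on the cluster expects, which is what the VerifyError is complaining about.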

Alternatively, you can try setting the HADOOP_HOME environment variable to point to the appropriate installation.
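Something along these lines before restarting Zeppelin (the path is an example; use wherever Hadoop actually lives on your instance):

```shell
# Point Zeppelin at the cluster's own Hadoop installation so it picks up
# the matching jars at runtime. The path below is hypothetical.
export HADOOP_HOME=/path/to/cluster/hadoop
$ZEPPELIN_HOME/bin/zeppelin-daemon.sh restart
```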

Upvotes: 1
