Sumit G

Reputation: 456

Unable to connect to MinIO S3 from Spark

I am trying to connect to S3 provided by MinIO using Spark, but it says the bucket minikube does not exist (I have already created the bucket).

val spark = SparkSession.builder().appName("AliceProcessingTwentyDotTwo")
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer").master("local[1]")
    .getOrCreate()
  val sc = spark.sparkContext
  // Point the S3A connector at the local MinIO server instead of AWS
  sc.hadoopConfiguration.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
  sc.hadoopConfiguration.set("fs.s3a.endpoint", "http://localhost:9000")
  sc.hadoopConfiguration.set("fs.s3a.access.key", "minioadmin")
  sc.hadoopConfiguration.set("fs.s3a.secret.key", "minioadmin")
  // MinIO needs path-style access (http://host/bucket, not http://bucket.host)
  sc.hadoopConfiguration.set("fs.s3a.path.style.access", "true")
  sc.hadoopConfiguration.set("fs.s3a.connection.ssl.enabled", "false")
  sc.textFile("s3a://minikube/data.json").collect()

I am using the following guide to connect.

https://github.com/minio/cookbook/blob/master/docs/apache-spark-with-minio.md

These are the dependencies I used in Scala:

"org.apache.spark" %% "spark-core" % "2.4.0",
"org.apache.spark" %% "spark-sql" % "2.4.0",
"com.amazonaws" % "aws-java-sdk" % "1.11.712",
"org.apache.hadoop" % "hadoop-aws" % "2.7.3"
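One thing worth noting about this dependency set: hadoop-aws 2.7.x was built against aws-java-sdk 1.7.4, so pairing it with aws-java-sdk 1.11.x is a frequent source of S3A classpath and signing errors. A minimal sbt sketch with matching versions (the 1.7.4 pin is the suggested change, not taken from the question) might look like:

```scala
// build.sbt — hadoop-aws should match the AWS SDK version it was compiled against
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.4.0",
  "org.apache.spark" %% "spark-sql"  % "2.4.0",
  // hadoop-aws 2.7.x expects aws-java-sdk 1.7.4, not the 1.11.x line
  "com.amazonaws"     % "aws-java-sdk" % "1.7.4",
  "org.apache.hadoop" % "hadoop-aws"   % "2.7.3"
)
```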

Upvotes: 0

Views: 5439

Answers (1)

Karan Manchanda

Reputation: 46

Try the Spark 2.4.3 build without Hadoop, and use Hadoop 2.8.2 or 3.1.2. After following the steps in the link below, I was able to connect to MinIO using the CLI.

https://www.jitsejan.com/setting-up-spark-with-minio-as-object-storage.html
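To rule out problems on the MinIO side before involving Spark at all, the bucket can be checked with the `mc` client. This is a sketch of that check; the alias name `local` is an arbitrary choice, and the endpoint and credentials assume the same setup as in the question:

```
# Register the local MinIO server under the alias "local"
mc alias set local http://localhost:9000 minioadmin minioadmin

# List the bucket; data.json should appear if the bucket and object exist
mc ls local/minikube
```

If `mc ls` succeeds here but Spark still reports a missing bucket, the problem is on the Spark/Hadoop classpath side rather than in MinIO.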

Upvotes: 2
