ssgakhal

Reputation: 398

Cannot connect to S3 with PySpark. Error message: Bad Request, S3 Extended Request ID: my_extend_request_id

I'm trying to connect Spark to S3. Spark is installed on an EC2 cluster consisting of one master and two slave machines, each with 6 GB of RAM, located in the Central Europe (Frankfurt) AWS region (eu-central-1). I have installed the AWS CLI, configured it with my keys, and exported them as environment variables. I am using PySpark.
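As a sanity check (a minimal sketch, assuming the credentials were exported under the standard AWS variable names), I confirm they are visible to the Python process:

import os

# Standard AWS credential environment variable names
for var in ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"):
    print(var, "is set" if os.environ.get(var) else "is MISSING")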

I start it using:

pyspark --master spark://my_ip:7077 --executor-memory 1G --packages org.apache.hadoop:hadoop-aws:2.7.3,com.amazonaws:aws-java-sdk-pom:1.11.274,com.databricks:spark-csv_2.10:1.1.0

The hadoop-aws:2.7.3 package matches the version of hadoop-common-2.7.3.jar that ships with my Spark installation.

Once in PySpark, I run the following to set up the S3 configuration:

sc._jsc.hadoopConfiguration().set("com.amazonaws.services.s3.enableV4", "true")
sc._jsc.hadoopConfiguration().set("fs.s3.awsAccessKeyId", "my_key")
sc._jsc.hadoopConfiguration().set("fs.s3.awsSecretAccessKey", "my_secret_key")
sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com")
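For reference, the same credentials written with the s3a-prefixed keys look like this (a sketch; I have not confirmed whether the s3a connector picks up the fs.s3 names above):

hc = sc._jsc.hadoopConfiguration()
# s3a-prefixed credential keys read by the Hadoop S3A connector
hc.set("fs.s3a.access.key", "my_key")         # placeholder
hc.set("fs.s3a.secret.key", "my_secret_key")  # placeholder
hc.set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com")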

I then run the following:

bucket = "my_bucket"
textFile = sc.textFile("s3a://" + bucket + "/tmp/small_file.csv")
textFile.take(5)

and Python throws the following error:

Py4JJavaError: An error occurred while calling o36.partitions.
: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: my_request_id, AWS Error Code: null, AWS Error Message: Bad Request, S3 Extended Request ID: my_extend_request_id
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
    at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
    at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:258)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:61)
    at org.apache.spark.api.java.AbstractJavaRDDLike.partitions(JavaRDDLike.scala:45)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)

Did I miss something?

Upvotes: 2

Views: 1725

Answers (1)

VietD

Reputation: 11

Try changing the config from

sc._jsc.hadoopConfiguration().set("com.amazonaws.services.s3.enableV4", "true")

to

sc.setSystemProperty("com.amazonaws.services.s3.enableV4", "true")
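For context, a minimal end-to-end sketch of this suggestion (bucket and key names are placeholders from the question; the fs.s3a.* credential keys are my assumption, since the question mixes fs.s3 and fs.s3a names):

from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# Frankfurt (eu-central-1) only accepts Signature Version 4,
# so enable it as a JVM system property before touching S3.
sc.setSystemProperty("com.amazonaws.services.s3.enableV4", "true")

hc = sc._jsc.hadoopConfiguration()
hc.set("fs.s3a.access.key", "my_key")          # placeholder
hc.set("fs.s3a.secret.key", "my_secret_key")   # placeholder
hc.set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com")

textFile = sc.textFile("s3a://my_bucket/tmp/small_file.csv")
print(textFile.take(5))

Note that setSystemProperty only affects the driver JVM; on a cluster you may also need to pass -Dcom.amazonaws.services.s3.enableV4=true to the executors via spark.executor.extraJavaOptions.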

Upvotes: 1
