Reputation: 26149
I am using PyCharm 2018.1 with Python 3.4 and Spark 2.3 installed via pip in a virtualenv. There is no Hadoop installation on the local host and no standalone Spark installation either (thus no SPARK_HOME, HADOOP_HOME, etc.)
When I try this:
from pyspark import SparkConf
from pyspark import SparkContext

conf = SparkConf()\
    .setMaster("local")\
    .setAppName("pyspark-unittests")\
    .set("spark.sql.parquet.compression.codec", "snappy")
sc = SparkContext(conf=conf)
inputFile = sc.textFile("s3://somebucket/file.csv")
I get:
py4j.protocol.Py4JJavaError: An error occurred while calling o23.partitions.
: java.io.IOException: No FileSystem for scheme: s3
How can I read from s3 while running pyspark in local mode without a complete Hadoop install locally?
FWIW - this works great when I execute it on an EMR node in non-local mode.
The following does not work (same error, although it does resolve and download the dependencies):
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages "org.apache.hadoop:hadoop-aws:3.1.0" pyspark-shell'

from pyspark import SparkConf
from pyspark import SparkContext

conf = SparkConf()\
    .setMaster("local")\
    .setAppName("pyspark-unittests")\
    .set("spark.sql.parquet.compression.codec", "snappy")
sc = SparkContext(conf=conf)
inputFile = sc.textFile("s3://somebucket/file.csv")
Same (bad) results with:
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = '--jars "/path/to/hadoop-aws-3.1.0.jar" pyspark-shell'

from pyspark import SparkConf
from pyspark import SparkContext

conf = SparkConf()\
    .setMaster("local")\
    .setAppName("pyspark-unittests")\
    .set("spark.sql.parquet.compression.codec", "snappy")
sc = SparkContext(conf=conf)
inputFile = sc.textFile("s3://somebucket/file.csv")
Upvotes: 6
Views: 12735
Reputation: 1929
preparation:
Add the following lines to your Spark config file; for my local PySpark it is /usr/local/spark/conf/spark-defaults.conf:
spark.hadoop.fs.s3a.access.key=<your access key>
spark.hadoop.fs.s3a.secret.key=<your secret key>
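If you would rather not edit spark-defaults.conf, the same s3a credentials can be passed programmatically through SparkConf; a minimal sketch (the placeholder key values are assumptions you must replace):
from pyspark import SparkConf
from pyspark import SparkContext

# Equivalent to the spark-defaults.conf entries above: the spark.hadoop.* prefix
# forwards these settings to the underlying Hadoop configuration.
conf = SparkConf()\
    .setAppName("read_s3")\
    .setMaster("local[2]")\
    .set("spark.hadoop.fs.s3a.access.key", "<your access key>")\
    .set("spark.hadoop.fs.s3a.secret.key", "<your secret key>")
sc = SparkContext(conf=conf)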
python file content:
from __future__ import print_function
import os

from pyspark import SparkConf
from pyspark import SparkContext

os.environ["PYSPARK_PYTHON"] = "/usr/bin/python3"
os.environ["PYSPARK_DRIVER_PYTHON"] = "/usr/bin/python3"

if __name__ == "__main__":
    conf = SparkConf().setAppName("read_s3").setMaster("local[2]")
    sc = SparkContext(conf=conf)
    my_s3_file3 = sc.textFile("s3a://store-test-1/test-file")
    print("file count:", my_s3_file3.count())
submit:
spark-submit --master local \
--packages org.apache.hadoop:hadoop-aws:2.7.3,\
com.amazonaws:aws-java-sdk:1.7.4,\
org.apache.hadoop:hadoop-common:2.7.3 \
<path to the py file above>
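If you want to run the script directly (for example from an IDE) instead of through spark-submit, the same packages can be pulled in by setting PYSPARK_SUBMIT_ARGS before the SparkContext is created; a sketch mirroring the versions above:
import os

# Must be set before the SparkContext is created; the coordinates and versions
# mirror the --packages list in the spark-submit command above.
os.environ["PYSPARK_SUBMIT_ARGS"] = (
    "--packages org.apache.hadoop:hadoop-aws:2.7.3,"
    "com.amazonaws:aws-java-sdk:1.7.4,"
    "org.apache.hadoop:hadoop-common:2.7.3 pyspark-shell"
)
# ...then build the SparkConf/SparkContext exactly as in the file above.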
Upvotes: 1
Reputation: 146630
So Glennie's answer was close, but not what would work in your case. The key thing is to select the right version of the dependencies. If you look at the jars bundled with PySpark inside the virtual environment, everything points to one version, 2.7.3, which is what you also need to use:
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages "org.apache.hadoop:hadoop-aws:2.7.3" pyspark-shell'
You should verify which version your installation is using by checking the path venv/Lib/site-packages/pyspark/jars inside your project's virtual env; a quick way to do that is sketched below.
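A minimal sketch for listing the bundled Hadoop jars (the pyspark package location is resolved at runtime, so it works wherever the virtualenv lives):
import glob
import os

import pyspark

# List the Hadoop jars that ship with the pip-installed pyspark; the version in
# their file names (e.g. hadoop-common-2.7.3.jar) is the one hadoop-aws must match.
jar_dir = os.path.join(os.path.dirname(pyspark.__file__), "jars")
for jar in sorted(glob.glob(os.path.join(jar_dir, "hadoop-*.jar"))):
    print(os.path.basename(jar))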
After that you can use s3a by default, or s3 by defining the handler class for it:
# Only needed if you use s3://
sc._jsc.hadoopConfiguration().set("fs.s3.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
sc._jsc.hadoopConfiguration().set('fs.s3a.access.key', 'awsKey')
sc._jsc.hadoopConfiguration().set('fs.s3a.secret.key', 'awsSecret')
s3File = sc.textFile("s3a://myrepo/test.csv")
print(s3File.count())
print(s3File.id())
Upvotes: 10
Reputation: 13154
You should use the s3a protocol when accessing S3 locally. Make sure you add your key and secret to the SparkContext first. Like this:
sc = SparkContext(conf=conf)
sc._jsc.hadoopConfiguration().set('fs.s3a.access.key', 'awsKey')
sc._jsc.hadoopConfiguration().set('fs.s3a.secret.key', 'awsSecret')
inputFile = sc.textFile("s3a://somebucket/file.csv")
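With the pip-installed Spark from the question, this also needs the hadoop-aws jar on the classpath, matched to the bundled Hadoop version (2.7.3, as described in the other answer). A self-contained sketch combining both pieces, with placeholder credentials and bucket name:
import os

# Must be set before the SparkContext is created.
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages "org.apache.hadoop:hadoop-aws:2.7.3" pyspark-shell'

from pyspark import SparkConf
from pyspark import SparkContext

conf = SparkConf().setMaster("local").setAppName("pyspark-unittests")
sc = SparkContext(conf=conf)
sc._jsc.hadoopConfiguration().set('fs.s3a.access.key', 'awsKey')
sc._jsc.hadoopConfiguration().set('fs.s3a.secret.key', 'awsSecret')

inputFile = sc.textFile("s3a://somebucket/file.csv")
print(inputFile.count())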
Upvotes: 3